Field                Type      String length (min-max)
gem_id               string    37-41
paper_id             string    3-4
paper_title          string    19-183
paper_abstract       string    168-1.38k
paper_content        sequence  -
paper_headers        sequence  -
slide_id             string    37-41
slide_title          string    2-85
slide_content_text   string    11-2.55k
target               string    11-2.55k
references           list      -
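The fields above can be inspected programmatically. Below is a minimal loading sketch with the Hugging Face `datasets` library, assuming the corpus is published on the Hub under an ID like `GEM/SciDuet` (the exact identifier is an assumption; substitute the real one if it differs).

```python
# Minimal loading sketch for the schema above, using the Hugging Face
# `datasets` library. The Hub ID "GEM/SciDuet" is an assumption here.
from datasets import load_dataset

ds = load_dataset("GEM/SciDuet", split="train")

print(ds.features)       # field names and types, matching the table above
print(ds.num_rows)       # number of (paper, slide) examples
print(ds[0]["gem_id"])   # e.g. "GEM-SciDuet-train-54#paper-1096#slide-7"
```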
gem_id: GEM-SciDuet-train-54#paper-1096#slide-7
paper_id: 1096
paper_title: The Language of Legal and Illegal Activity on the Darknet
paper_abstract: The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189 ], "paper_content_text": [ "Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.", "The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.", "Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .", "Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.", "However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.", "In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.", "Our data is available upon request.", "fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.", "(2018) , but they too did not investigate in what ways these two classes differ.", "This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.", "We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.", "We find a number of distinguishing features.", "First, we confirm the results of Avarikioti et al.", "(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.", "Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).", "This indicates that the two classes are different in terms of their syntactic structure.", "Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).", "The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.", "Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.", "This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.", "By experimenting on a different 
domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.", "After discussing previous works in Section 2, we detail the datasets used in Section 3.", "Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).", "Section 6 presents additional experiments, which explore cross-domain classification.", "We further analyze and discuss the findings in Section 7.", "Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.", "For example, Biryukov et al.", "(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.", "Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.", "While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.", "Some works directly addressed a specific type of illegality and a particular communication context.", "Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.", "The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.", "Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.", "Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.", "Al Nabki et al.", "(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.", "For some of the categories, legal and illegal activities are distinguished.", "However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.", "Al Nabki et al.", "(2019) extended the dataset to form DUTA-10K, which we use here.", "Their results show that 20% of the hidden services correspond to \"suspicious\" activities.", "The analysis was conducted using the text classifier presented in Al Nabki et al.", "(2017) and manual verification.", "Recently, Avarikioti et al.", "(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.", "The experiments were performed on a newly crawled corpus obtained by recursive search.", "The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.", "Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.", "They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.", "Using the dataset of Al Nabki et al.", "(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.", "Datasets Used Onion corpus.", "We 
experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .", "We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.", "These websites advertise and sell drugs, often to international customers.", "While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.", "These pages are directed by sellers to their customers.", "eBay corpus.", "As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.", "eBay is one of the largest hosting sites for retail sellers of various goods.", "Our corpus contains 118 item descriptions, each consisting of more than one sentence.", "Item descriptions vary in price, item sold and seller.", "The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.", "For example, where many sell the same product, only one example was added to the corpus.", "Search queries also included filtering for price, so that each query resulted with different items.", "Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.", "Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.", "Cleaning.", "As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.", "HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.", "We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).", "We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.", "\"Showing all 9 results\").", "Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.", "Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .", "While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.", "This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).", "We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.", "Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).", "Since any two sets of texts are bound to show some 
disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.", "The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.", "We refer to this measure as \"self-distance\".", "Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.", "pora lies between 0.40 to 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.", "Similar results are obtained using Variational distance, and are omitted for brevity.", "These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.", "Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.", "In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.", "Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.", "Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.", "5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .", "For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.", "The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.", "We also report the standard error for each average.", "According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.", "For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.", "Onion named entities is comparable and relatively low.", "However, sites selling legal drugs on Onion have a much higher Wikification percentage.", "Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.", "However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.", "These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.", "In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.", 
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.", "In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.", "6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.", "Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.", "Another goal of the classification task is to confirm our finding that the domains are distinguishable.", "Experimental setup.", "We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.", "We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.", "Model.", "To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.", "This simple classifier features frequently in work on text classification in the Darknet.", "• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.", "• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .", "BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.", "The word vectors are not updated during training.", "Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).", "• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).", "• attention: we replace the word representations with contextualized pre-trained representations from ELMo .", "We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.", "This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .", "For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .", "We use the AllenNLP library 8 to implement the neural network classifiers.", "7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.", "In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).", "Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure 
(represented through POS tags).", "For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.", "Settings.", "We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.", "• Training and testing on Legal Onion vs.", "Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.", "Results The accuracy scores for the different classifiers and settings are reported in Table 3 .", "confirmed by the drop in accuracy when content words are removed.", "However, in this setting (drop cont.", "), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.", "Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.", "This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.", "It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).", "Legal vs. illegal drugs.", "Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.", "However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).", "This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.", "Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.", "The forums contain user-written text in various topics.", "Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.", "As this domain contains usergenerated content, it is more varied and noisy.", "Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.", "We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.", "• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.", "This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.", "Results Accuracy scores are reported in Table 4 .", "Legal vs. 
illegal forums.", "Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.", "However, the SVM model achieves an accuracy of 85.3% in the full setting.", "Good performance is presented by this model even in the cases where the content words are dropped (drop.", "cont.)", "or replaced by part-of-speech tags (pos cont.", "), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.", "Cross-domain evaluation.", "Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.", "This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.", "This also shows that illegal texts in Tor share common properties regardless of topical category.", "The much lower results obtained by the models where content words are dropped (drop cont.)", "or converted to POS tags (pos cont.", "), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.", "Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.", "Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.", "This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.", "This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.", "Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.", "Analysis of texts from the datasets.", "Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.", "Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.", "Analysis of manipulated texts.", "Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"", "in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.", "However, the SVM model does manage to distinguish between the texts even in this setting.", "Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.", "Analysis of learned feature weights.", "As 
the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.", "Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.", "Illegal Onion classification in this setting.", "Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.", "Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.", "Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.", "Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).", "Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.", "We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5", "5.1", "6", "6.1", "6.2", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets Used", "Domain Differences", "Vocabulary Differences", "Differences in Named Entities", "Classification Experiments", "Results", "Illegality Detection Across Domains", "Experimental setup", "Results", "Discussion", "Conclusion" ] }
slide_id: GEM-SciDuet-train-54#paper-1096#slide-7
slide_title: Cleaning
slide_content_text: Filter out non-English pages. Remove non-linguistic content: buttons, URLs... Split to paragraphs, join to single lines, remove duplicates. Sampled 571 paragraphs from each, for comparable size.
target: Filter out non-English pages. Remove non-linguistic content: buttons, URLs... Split to paragraphs, join to single lines, remove duplicates. Sampled 571 paragraphs from each, for comparable size.
references: []
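Each row pairs a full paper with a single slide. The sketch below illustrates one way to rebuild the paper text and form a source/target pair; it assumes `ds` from the loading snippet above and that `paper_content` is exposed as a dict of aligned lists, as in this preview. The access pattern and field layout are assumptions, not a prescribed API.

```python
# Sketch of unpacking one row into a (source, target) pair for
# paper-to-slide generation. Assumes `example = ds[0]` and that
# `paper_content` is a dict of aligned lists, as shown in the preview;
# adjust the keys if the loaded feature layout differs.
def unpack_example(example):
    content = example["paper_content"]
    # Sentence ids and sentence strings are parallel sequences; zip them
    # to recover the running text of the paper.
    sentences = list(zip(content["paper_content_id"],
                         content["paper_content_text"]))
    paper_text = " ".join(text for _, text in sentences)

    # In these rows `target` mirrors `slide_content_text`; the slide title
    # is prepended to the paper text as the generation input.
    source = f"{example['slide_title']}\n\n{paper_text}"
    return source, example["target"]

source, target = unpack_example(ds[0])
print(target[:80])  # "Filter out non-English pages. Remove non-linguistic ..."
```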
gem_id: GEM-SciDuet-train-54#paper-1096#slide-8
paper_id: 1096
paper_title: The Language of Legal and Illegal Activity on the Darknet
paper_abstract: The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189 ], "paper_content_text": [ "Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.", "The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.", "Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .", "Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.", "However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.", "In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.", "Our data is available upon request.", "fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.", "(2018) , but they too did not investigate in what ways these two classes differ.", "This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.", "We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.", "We find a number of distinguishing features.", "First, we confirm the results of Avarikioti et al.", "(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.", "Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).", "This indicates that the two classes are different in terms of their syntactic structure.", "Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).", "The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.", "Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.", "This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.", "By experimenting on a different 
domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.", "After discussing previous works in Section 2, we detail the datasets used in Section 3.", "Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).", "Section 6 presents additional experiments, which explore cross-domain classification.", "We further analyze and discuss the findings in Section 7.", "Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.", "For example, Biryukov et al.", "(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.", "Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.", "While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.", "Some works directly addressed a specific type of illegality and a particular communication context.", "Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.", "The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.", "Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.", "Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.", "Al Nabki et al.", "(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.", "For some of the categories, legal and illegal activities are distinguished.", "However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.", "Al Nabki et al.", "(2019) extended the dataset to form DUTA-10K, which we use here.", "Their results show that 20% of the hidden services correspond to \"suspicious\" activities.", "The analysis was conducted using the text classifier presented in Al Nabki et al.", "(2017) and manual verification.", "Recently, Avarikioti et al.", "(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.", "The experiments were performed on a newly crawled corpus obtained by recursive search.", "The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.", "Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.", "They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.", "Using the dataset of Al Nabki et al.", "(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.", "Datasets Used Onion corpus.", "We 
experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .", "We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.", "These websites advertise and sell drugs, often to international customers.", "While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.", "These pages are directed by sellers to their customers.", "eBay corpus.", "As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.", "eBay is one of the largest hosting sites for retail sellers of various goods.", "Our corpus contains 118 item descriptions, each consisting of more than one sentence.", "Item descriptions vary in price, item sold and seller.", "The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.", "For example, where many sell the same product, only one example was added to the corpus.", "Search queries also included filtering for price, so that each query resulted with different items.", "Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.", "Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.", "Cleaning.", "As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.", "HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.", "We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).", "We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.", "\"Showing all 9 results\").", "Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.", "Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .", "While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.", "This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).", "We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.", "Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).", "Since any two sets of texts are bound to show some 
disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.", "The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.", "We refer to this measure as \"self-distance\".", "Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.", "pora lies between 0.40 to 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.", "Similar results are obtained using Variational distance, and are omitted for brevity.", "These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.", "Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.", "In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.", "Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.", "Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.", "5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .", "For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.", "The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.", "We also report the standard error for each average.", "According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.", "For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.", "Onion named entities is comparable and relatively low.", "However, sites selling legal drugs on Onion have a much higher Wikification percentage.", "Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.", "However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.", "These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.", "In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.", 
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.", "In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.", "6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.", "Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.", "Another goal of the classification task is to confirm our finding that the domains are distinguishable.", "Experimental setup.", "We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.", "We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.", "Model.", "To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.", "This simple classifier features frequently in work on text classification in the Darknet.", "• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.", "• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .", "BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.", "The word vectors are not updated during training.", "Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).", "• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).", "• attention: we replace the word representations with contextualized pre-trained representations from ELMo .", "We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.", "This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .", "For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .", "We use the AllenNLP library 8 to implement the neural network classifiers.", "7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.", "In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).", "Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure 
(represented through POS tags).", "For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.", "Settings.", "We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.", "• Training and testing on Legal Onion vs.", "Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.", "Results The accuracy scores for the different classifiers and settings are reported in Table 3 .", "confirmed by the drop in accuracy when content words are removed.", "However, in this setting (drop cont.", "), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.", "Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.", "This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.", "It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).", "Legal vs. illegal drugs.", "Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.", "However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).", "This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.", "Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.", "The forums contain user-written text in various topics.", "Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.", "As this domain contains usergenerated content, it is more varied and noisy.", "Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.", "We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.", "• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.", "This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.", "Results Accuracy scores are reported in Table 4 .", "Legal vs. 
illegal forums.", "Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.", "However, the SVM model achieves an accuracy of 85.3% in the full setting.", "Good performance is presented by this model even in the cases where the content words are dropped (drop.", "cont.)", "or replaced by part-of-speech tags (pos cont.", "), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.", "Cross-domain evaluation.", "Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.", "This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.", "This also shows that illegal texts in Tor share common properties regardless of topical category.", "The much lower results obtained by the models where content words are dropped (drop cont.)", "or converted to POS tags (pos cont.", "), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.", "Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.", "Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.", "This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.", "This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.", "Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.", "Analysis of texts from the datasets.", "Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.", "Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.", "Analysis of manipulated texts.", "Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"", "in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.", "However, the SVM model does manage to distinguish between the texts even in this setting.", "Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.", "Analysis of learned feature weights.", "As 
the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.", "Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.", "Illegal Onion classification in this setting.", "Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.", "Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.", "Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.", "Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).", "Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.", "We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5", "5.1", "6", "6.1", "6.2", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets Used", "Domain Differences", "Vocabulary Differences", "Differences in Named Entities", "Classification Experiments", "Results", "Illegality Detection Across Domains", "Experimental setup", "Results", "Discussion", "Conclusion" ] }
GEM-SciDuet-train-54#paper-1096#slide-8
Domain Differences Vocabulary
Distance between word distributions, measured by Jensen-Shannon divergence and Variational (L1) distance. [slide figure: word-frequency histograms; most frequent words include "to", "the", "of", "is", "and"] Small self-distances when splitting each corpus in half, but the different domains are about equidistant.
Distance between word distributions, measured by Jensen-Shannon divergence and Variational (L1) distance. [slide figure: word-frequency histograms; most frequent words include "to", "the", "of", "is", "and"] Small self-distances when splitting each corpus in half, but the different domains are about equidistant.
[]
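The vocabulary-distance comparison summarized on the slide above (small self-distances when a corpus is split in half, larger and roughly equal distances between the three domains) can be computed along the following lines. This is a hedged sketch, assuming whitespace tokenization, a shared vocabulary, and base-2 Jensen-Shannon divergence; the authors' exact preprocessing is not reproduced here.

```python
# Sketch of the word-distribution distances behind the numbers on this slide.
import random
from collections import Counter
import numpy as np

def word_dist(texts, vocab):
    counts = Counter(w for t in texts for w in t.split())
    freqs = np.array([counts[w] for w in vocab], dtype=float)
    return freqs / max(freqs.sum(), 1.0)

def js_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def variational_distance(p, q):
    return float(np.abs(p - q).sum())  # L1 / Manhattan distance

def self_distance(texts, seed=0):
    """Split one corpus into two random halves and measure the divergence between them."""
    texts = list(texts)
    random.Random(seed).shuffle(texts)
    half = len(texts) // 2
    vocab = sorted({w for t in texts for w in t.split()})
    return js_divergence(word_dist(texts[:half], vocab), word_dist(texts[half:], vocab))

def cross_distance(texts_a, texts_b):
    """Distance between two corpora over a shared vocabulary."""
    vocab = sorted({w for t in texts_a + texts_b for w in t.split()})
    return js_divergence(word_dist(texts_a, vocab), word_dist(texts_b, vocab))
```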
GEM-SciDuet-train-54#paper-1096#slide-9
GEM-SciDuet-train-54#paper-1096#slide-9
Domain Differences Characteristics of Darknet Data
Legal and illegal Onion should be considered different domains. Diverse: sub-domains are distinguishable. Unique: distinguishable from other domains.
Legal and illegal Onion should be considered different domains. Diverse: sub-domains are distinguishable. Unique: distinguishable from other domains.
[]
GEM-SciDuet-train-54#paper-1096#slide-10
GEM-SciDuet-train-54#paper-1096#slide-10
Domain Differences Named Entities and Wikification
NE extraction [spaCy] + Wikification [Bunescu and Pasca, 2006]. % (of detected NEs) Wikifiable eBay By manual inspection, low NE precision and recall for Illegal Onion. Slang words for drugs (e.g., kush) falsely picked up as NEs. Standard NLP is not suited for this domain.
NE extraction [spaCy] + Wikification [Bunescu and Pasca, 2006]. [Chart: % of detected NEs that are Wikifiable, by domain (eBay, Legal Onion, Illegal Onion).] By manual inspection, low NE precision and recall for Illegal Onion. Slang words for drugs (e.g., kush) falsely picked up as NEs. Standard NLP is not suited for this domain.
[]
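A rough sketch of the NE-extraction and Wikification ratio behind the slide above: spaCy provides the named entities, and each entity is checked against an entity-linking service. The paper queries the DBpedia Ontology API (Daiber et al., 2013); the public DBpedia Spotlight /annotate endpoint and the confidence threshold used below are assumed stand-ins, not the authors' exact calls.

```python
# Hedged sketch: share of spaCy-detected named entities that can be linked to
# DBpedia/Wikipedia. Endpoint and threshold are assumptions.
import requests
import spacy

nlp = spacy.load("en_core_web_sm")
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"  # assumed endpoint

def wikifiable_ratio(text: str) -> float:
    ents = [ent.text for ent in nlp(text).ents]
    if not ents:
        return 0.0
    linked = 0
    for ent in ents:
        resp = requests.get(
            SPOTLIGHT_URL,
            params={"text": ent, "confidence": 0.5},
            headers={"Accept": "application/json"},
        )
        if resp.ok and resp.json().get("Resources"):
            linked += 1
    return linked / len(ents)

# Averaging wikifiable_ratio over the pages of each domain gives the
# per-domain percentages compared in the chart.
```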
GEM-SciDuet-train-54#paper-1096#slide-11
1096
The Language of Legal and Illegal Activity on the Darknet
GEM-SciDuet-train-54#paper-1096#slide-11
Classification Classes
We identified three domains. Two binary classification settings: (1) eBay, Legal Onion; (2) Legal Onion, Illegal Onion. What are the linguistic features distinguishing them?
We identified three domains. Two binary classification settings: (1) eBay, Legal Onion; (2) Legal Onion, Illegal Onion. What are the linguistic features distinguishing them?
[]
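For concreteness, a small sketch of how one of the balanced binary settings above could be assembled from per-domain paragraph lists, following the paper's setup of downsampling the larger class and splitting roughly 80/10/10 into train/validation/test. This is an illustrative reconstruction, not the authors' code, and the variable names in the usage line are placeholders.

```python
# Build one balanced binary setting (e.g., Legal Onion vs. Illegal Onion)
# from two lists of cleaned paragraphs.
import random

def make_setting(class_a, class_b, seed=0):
    rng = random.Random(seed)
    n = min(len(class_a), len(class_b))            # downsample the larger class
    data = [(p, 0) for p in rng.sample(class_a, n)] + \
           [(p, 1) for p in rng.sample(class_b, n)]
    rng.shuffle(data)
    n_train = int(0.8 * len(data))
    n_dev = int(0.1 * len(data))
    return data[:n_train], data[n_train:n_train + n_dev], data[n_train + n_dev:]

# train, dev, test = make_setting(legal_onion_paragraphs, illegal_onion_paragraphs)
```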
GEM-SciDuet-train-54#paper-1096#slide-12
1096
The Language of Legal and Illegal Activity on the Darknet
GEM-SciDuet-train-54#paper-1096#slide-12
Classification Classifiers
NB: Naive Bayes (bag of words); SVM: Support Vector Machine; BoE: sum/average GloVe + MLP; seq2vec: BiLSTM + MLP; attention: ELMo + BCN (self-attention)
NB: Naive Bayes (bag of words); SVM: Support Vector Machine; BoE: sum/average GloVe + MLP; seq2vec: BiLSTM + MLP; attention: ELMo + BCN (self-attention)
[]
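The classifier lineup in the slide above can be made concrete with a short sketch. The following is a minimal illustration of the two non-neural baselines only, Naive Bayes over binary bag-of-words indicators and an RBF-kernel SVM over word counts; it is not the authors' released implementation, it assumes scikit-learn is installed, and the texts and labels below are hypothetical placeholders rather than the Onion/eBay corpora (the GloVe BoE, BiLSTM seq2vec and ELMo attention models are omitted).

# Illustrative sketch only (assumes scikit-learn; placeholder data, not the real corpora).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical paragraphs and labels standing in for the drugs-domain data.
texts = [
    "Generic Viagra (Oral Jelly) is used for Erectile Dysfunction",
    "Buy pharmacy-grade painkillers with a valid prescription",
    "Welcome to SnowKings Good Quality Cocaine",
    "Stealth worldwide shipping for all orders of kush",
]
labels = ["legal", "legal", "illegal", "illegal"]

# NB: indicator (binary) bag-of-words features fed to a Bernoulli Naive Bayes classifier.
nb = make_pipeline(CountVectorizer(binary=True), BernoulliNB(alpha=1.0))

# SVM: word-count features fed to an RBF-kernel support vector classifier.
svm = make_pipeline(CountVectorizer(), SVC(kernel="rbf", gamma="scale", tol=1e-5))

for name, model in [("NB", nb), ("SVM", svm)]:
    model.fit(texts, labels)
    print(name, model.predict(["Cheap generic pharmaceuticals shipped discreetly"]))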
GEM-SciDuet-train-54#paper-1096#slide-13
GEM-SciDuet-train-54#paper-1096#slide-13
Classification Manipulations
To find what linguistic cues are used for classification. Conditions: Drop content words Replace content words with their POS Drop function words Replace function words with their POS Content POS: {adj, adv, noun, propn, verb, x, num} Generic Viagra ( Oral Jelly ) is used for Erectile Dysfunction propn propn propn propn verb verb for propn propn Welcome to SnowKings Good Quality Cocaine verb to propn propn propn propn
To find what linguistic cues are used for classification. Conditions: Drop content words Replace content words with their POS Drop function words Replace function words with their POS Content POS: {adj, adv, noun, propn, verb, x, num} Generic Viagra ( Oral Jelly ) is used for Erectile Dysfunction propn propn propn propn verb verb for propn propn Welcome to SnowKings Good Quality Cocaine verb to propn propn propn propn
[]
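The manipulation conditions in the slide above can be sketched as follows. This is an illustrative sketch only, not the authors' code: it assumes spaCy with the en_core_web_sm model is installed, uses spaCy's coarse universal POS tags, takes the content-POS set from the slide, and reuses the example sentences shown there (the exact tags the model assigns may differ slightly from those on the slide).

# Illustrative sketch only (assumes spaCy and the en_core_web_sm model are installed).
import spacy

# Content POS set from the slide: {adj, adv, noun, propn, verb, x, num}.
CONTENT_POS = {"ADJ", "ADV", "NOUN", "PROPN", "VERB", "X", "NUM"}

nlp = spacy.load("en_core_web_sm")

def pos_content(text):
    # Replace content words with their (lower-cased) POS tag; keep function words as-is.
    return " ".join(t.pos_.lower() if t.pos_ in CONTENT_POS else t.text for t in nlp(text))

def drop_content(text):
    # Drop content words entirely; keep only function words and punctuation.
    return " ".join(t.text for t in nlp(text) if t.pos_ not in CONTENT_POS)

print(pos_content("Generic Viagra ( Oral Jelly ) is used for Erectile Dysfunction"))
print(drop_content("Welcome to SnowKings Good Quality Cocaine"))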
GEM-SciDuet-train-54#paper-1096#slide-14
GEM-SciDuet-train-54#paper-1096#slide-14
Classification Results
eBay vs. Legal Onion drugs: Illegal Onion drugs: full drop content drop function pos content pos function
eBay vs. Legal Onion drugs: Illegal Onion drugs: full drop content drop function pos content pos function
[]
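The "pos cont." manipulation that the discussion in this record leans on (content words replaced by their universal POS tags, function words kept verbatim) can be sketched roughly as below. This is an illustrative reconstruction, not code released with the paper; the exact CONTENT_POS set and the example sentence are assumptions.

```python
# Sketch of the "pos cont." input manipulation described in the paper text:
# content words are replaced by their universal POS tag, function words kept.
# Not the authors' code; CONTENT_POS is assumed from the description of
# open-class (content) categories.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV", "NUM"}  # assumed content classes

def mask_content_words(paragraph: str) -> str:
    """Return the paragraph with every content word replaced by its POS tag."""
    doc = nlp(paragraph)
    out = []
    for tok in doc:
        out.append(tok.pos_ if tok.pos_ in CONTENT_POS else tok.text)
    return " ".join(out)

if __name__ == "__main__":
    # Hypothetical example sentence, not taken from the corpus.
    print(mask_content_words("We ship high quality pills to any country."))
    # roughly: "We VERB ADJ NOUN NOUN to any NOUN ."
```

Under this manipulation only the shallow syntactic skeleton and the function words remain, which is what the SVM results discussed above are measuring.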
GEM-SciDuet-train-54#paper-1096#slide-15
1096
The Language of Legal and Illegal Activity on the Darknet
GEM-SciDuet-train-54#paper-1096#slide-15
Cross Domain Classification Darknet Forums
Can we generalize beyond drugs? DUTA-10K also contains Legal Forums and Illegal Forums.
Can we generalize beyond drugs? DUTA-10K also contains Legal Forums and Illegal Forums.
[]
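A minimal sketch of the cross-domain setting this slide refers to (train a bag-of-words classifier on legal vs. illegal drug pages, evaluate on legal vs. illegal forum pages), using the classifier settings reported in the paper (BernoulliNB with binary features and alpha=1; SVC with an RBF kernel, gamma="scale", tol=1e-5). The file names and the one-paragraph-per-line loader are placeholders; the corpus itself is available from the authors on request.

```python
# Sketch of the cross-domain evaluation: train on drug pages, test on forum pages.
# Paths and the data layout are assumptions, not part of the released resources.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def load_paragraphs(path):
    """Placeholder loader: one cleaned paragraph per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical file names (label 0 = legal, 1 = illegal).
drug_legal = load_paragraphs("onion_drugs_legal.txt")
drug_illegal = load_paragraphs("onion_drugs_illegal.txt")
forum_legal = load_paragraphs("onion_forums_legal.txt")
forum_illegal = load_paragraphs("onion_forums_illegal.txt")

X_train_text = drug_legal + drug_illegal
y_train = [0] * len(drug_legal) + [1] * len(drug_illegal)
X_test_text = forum_legal + forum_illegal
y_test = [0] * len(forum_legal) + [1] * len(forum_illegal)

# Binary bag-of-words for Naive Bayes, raw counts for the SVM, as in the paper.
nb_vec = CountVectorizer(binary=True).fit(X_train_text)
svm_vec = CountVectorizer().fit(X_train_text)

nb = BernoulliNB(alpha=1.0).fit(nb_vec.transform(X_train_text), y_train)
svm = SVC(kernel="rbf", gamma="scale", tol=1e-5).fit(svm_vec.transform(X_train_text), y_train)

for name, model, vec in [("Naive Bayes", nb, nb_vec), ("SVM", svm, svm_vec)]:
    acc = accuracy_score(y_test, model.predict(vec.transform(X_test_text)))
    print(f"{name} cross-domain accuracy: {acc:.3f}")
```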
GEM-SciDuet-train-54#paper-1096#slide-16
1096
The Language of Legal and Illegal Activity on the Darknet
GEM-SciDuet-train-54#paper-1096#slide-16
Cross Domain Classification Results
Illegal Onion forums: full drop content drop function pos content pos function Trained on drugs, evaluated on forums (Legal vs. Illegal): full drop content drop function pos content pos function
Illegal Onion forums: full drop content drop function pos content pos function Trained on drugs, evaluated on forums (Legal vs. Illegal): full drop content drop function pos content pos function
[]
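The classification experiments quoted in the record above rely on a binary bag-of-words Naive Bayes classifier and an RBF-kernel SVM over word counts. The following is a minimal sketch of that setup, not the authors' code: the corpus loading is a placeholder, and only the hyperparameters stated in the quoted paper content (BernoulliNB with alpha = 1; SVC with gamma = "scale" and tolerance 1e-5) are taken from the source.

```python
# Minimal sketch of the bag-of-words classifiers described above (not the authors' code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: paragraph strings labeled 0 = legal, 1 = illegal.
train_texts = ["example paragraph from a legal pharmacy page",
               "example paragraph from an illegal drug listing"]
train_labels = [0, 1]
test_texts = ["held-out example paragraph"]
test_labels = [0]

# NB: binary (indicator) bag-of-words features, as described in the paper content.
nb = make_pipeline(CountVectorizer(binary=True), BernoulliNB(alpha=1.0))
# SVM: word-count features with an RBF kernel.
svm = make_pipeline(CountVectorizer(), SVC(kernel="rbf", gamma="scale", tol=1e-5))

for name, model in [("NB", nb), ("SVM", svm)]:
    model.fit(train_texts, train_labels)
    print(name, accuracy_score(test_labels, model.predict(test_texts)))
```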
GEM-SciDuet-train-54#paper-1096#slide-17
1096
The Language of Legal and Illegal Activity on the Darknet
The non-indexed parts of the Internet (the Darknet) have become a haven for both legal and illegal anonymous activity. Given the magnitude of these networks, scalably monitoring their activity necessarily relies on automated tools, and notably on NLP tools. However, little is known about what characteristics texts communicated through the Darknet have, and how well off-the-shelf NLP tools do on this domain. This paper tackles this gap and performs an in-depth investigation of the characteristics of legal and illegal text in the Darknet, comparing it to a clear net website with similar content as a control condition. Taking drug-related websites as a test case, we find that texts for selling legal and illegal drugs have several linguistic characteristics that distinguish them from one another, as well as from the control condition, among them the distribution of POS tags, and the coverage of their named entities in Wikipedia. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189 ], "paper_content_text": [ "Introduction The term \"Darknet\" refers to the subset of Internet sites and pages that are not indexed by search engines.", "The Darknet is often associated with the \".onion\" top-level domain, whose websites are referred to as \"Onion sites\", and are reachable via the Tor network anonymously.", "Under the cloak of anonymity, the Darknet harbors much illegal activity (Moore and Rid, 2016) .", "Applying NLP tools to text from the Darknet is thus important for effective law enforcement and intelligence.", "However, little is known about the characteristics of the language used in the Darknet, and specifically on what distinguishes text on websites that conduct legal and illegal activity.", "In * Equal contribution 1 Our code can be found in https://github.com/ huji-nlp/cyber.", "Our data is available upon request.", "fact, the only work we are aware of that classified Darknet texts into legal and illegal activity is Avarikioti et al.", "(2018) , but they too did not investigate in what ways these two classes differ.", "This paper addresses this gap, and studies the distinguishing features between legal and illegal texts in Onion sites, taking sites that advertise drugs as a test case.", "We compare our results to a control condition of texts from eBay 2 pages that advertise products corresponding to drug keywords.", "We find a number of distinguishing features.", "First, we confirm the results of Avarikioti et al.", "(2018) , that text from legal and illegal pages (henceforth, legal and illegal texts) can be distinguished based on the identity of the content words (bag-of-words) in about 90% accuracy over a balanced sample.", "Second, we find that the distribution of POS tags in the documents is a strong cue for distinguishing between the classes (about 71% accuracy).", "This indicates that the two classes are different in terms of their syntactic structure.", "Third, we find that legal and illegal texts are roughly as distinguishable from one another as legal texts and eBay pages are (both in terms of their words and their POS tags).", "The latter point suggests that legal and illegal texts can be considered distinct domains, which explains why they can be automatically classified, but also implies that applying NLP tools to Darknet texts is likely to face the obstacles of domain adaptation.", "Indeed, we show that named entities in illegal pages are covered less well by Wikipedia, i.e., Wikification works less well on them.", "This suggests that for high-performance text understanding, specialized knowledge bases and tools may be needed for processing texts from the Darknet.", "By experimenting on a different 
domain in Tor (user-generated content), we show that the legal/illegal distinction generalizes across domains.", "After discussing previous works in Section 2, we detail the datasets used in Section 3.", "Differences in the vocabulary and named entities between the classes are analyzed in Section 4, before the presentation of the classification experiments (Section 5).", "Section 6 presents additional experiments, which explore cross-domain classification.", "We further analyze and discuss the findings in Section 7.", "Related Work The detection of illegal activities on the Web is sometimes derived from a more general topic classification.", "For example, Biryukov et al.", "(2014) classified the content of Tor hidden services into 18 topical categories, only some of which correlate with illegal activity.", "Graczyk and Kinningham (2015) combined unsupervised feature selection and an SVM classifier for the classification of drug sales in an anonymous marketplace.", "While these works classified Tor texts into classes, they did not directly address the legal/illegal distinction.", "Some works directly addressed a specific type of illegality and a particular communication context.", "Morris and Hirst (2012) used an SVM classifier to identify sexual predators in chatting message systems.", "The model includes both lexical features, including emoticons, and behavioral features that correspond to conversational patterns.", "Another example is the detection of pedophile activity in peer-to-peer networks (Latapy et al., 2013) , where a predefined list of keywords was used to detect child-pornography queries.", "Besides lexical features, we here consider other general linguistic properties, such as syntactic structure.", "Al Nabki et al.", "(2017) presented DUTA (Darknet Usage Text Addresses), the first publicly available Darknet dataset, together with a manual classification into topical categories and subcategories.", "For some of the categories, legal and illegal activities are distinguished.", "However, the automatic classification presented in their work focuses on the distinction between different classes of illegal activity, without addressing the distinction between legal and illegal ones, which is the subject of the present paper.", "Al Nabki et al.", "(2019) extended the dataset to form DUTA-10K, which we use here.", "Their results show that 20% of the hidden services correspond to \"suspicious\" activities.", "The analysis was conducted using the text classifier presented in Al Nabki et al.", "(2017) and manual verification.", "Recently, Avarikioti et al.", "(2018) presented another topic classification of text from Tor together with a first classification into legal and illegal activities.", "The experiments were performed on a newly crawled corpus obtained by recursive search.", "The legal/illegal classification was done using an SVM classifier in an active learning setting with bag-of-words features.", "Legality was assessed in a conservative way where illegality is assigned if the purpose of the content is an obviously illegal action, even if the content might be technically legal.", "They found that a linear kernel worked best and reported an F1 score of 85% and an accuracy of 89%.", "Using the dataset of Al Nabki et al.", "(2019) , and focusing on specific topical categories, we here confirm the importance of content words in the classification, and explore the linguistic dimensions supporting classification into legal and illegal texts.", "Datasets Used Onion corpus.", "We 
experiment with data from Darknet websites containing legal and illegal activity, all from the DUTA-10K corpus (Al Nabki et al., 2019) .", "We selected the \"drugs\" sub-domain as a test case, as it is a large domain in the corpus, that has a \"legal\" and \"illegal\" sub-categories, and where the distinction between them can be reliably made.", "These websites advertise and sell drugs, often to international customers.", "While legal websites often sell pharmaceuticals, illegal ones are often related to substance abuse.", "These pages are directed by sellers to their customers.", "eBay corpus.", "As an additional dataset of similar size and characteristics, but from a clear net source, and of legal nature, we compiled a corpus of eBay pages.", "eBay is one of the largest hosting sites for retail sellers of various goods.", "Our corpus contains 118 item descriptions, each consisting of more than one sentence.", "Item descriptions vary in price, item sold and seller.", "The descriptions were selected by searching eBay for drug related terms, 3 and selecting search patterns to avoid over-repetition.", "For example, where many sell the same product, only one example was added to the corpus.", "Search queries also included filtering for price, so that each query resulted with different items.", "Either because of advertisement strategies or the geographical dispersion of the sellers, the eBay corpus contains formal as well as informal language, and some item descriptions contain abbreviations and slang.", "Importantly, eBay websites are assumed to conduct legal activity-even when discussing drug-related material, we find it is never the sale of illegal drugs but rather merchandise, tools, or otherwise related content.", "Cleaning.", "As preprocessing for all experiments, we apply some cleaning to the text of web pages in our corpora.", "HTML markup is already removed in the original datasets, but much non-linguistic content remains, such as buttons, encryption keys, metadata and URLs.", "We remove such text from the web pages, and join paragraphs to single lines (as newlines are sometimes present in the original dataset for display purposes only).", "We then remove any duplicate paragraphs, where paragraphs are considered identical if they share all but numbers (to avoid an over-representation of some remaining surrounding text from the websites, e.g.", "\"Showing all 9 results\").", "Domain Differences As pointed out by Plank (2011) , there is no common ground as to what constitutes a domain.", "Domain differences are attributed in some works to differences in vocabulary (Blitzer et al., 2006) and in other works to differences in style, genre and medium (McClosky, 2010) .", "While here we adopt an existing classification, based on the DUTA-10K corpus, we show in which way and to what extent it translates to distinct properties of the texts.", "This question bears on the possibility of distinguishing between legal and illegal drug-related websites based on their text alone (i.e., without recourse to additional information, such as meta-data or network structure).", "We examine two types of domain differences between legal and illegal texts: vocabulary differences and named entities.", "Vocabulary Differences To quantify the domain differences between texts from legal and illegal texts, we compute the frequency distribution of words in the eBay corpus, the legal and illegal drugs Onion corpora, and the entire Onion drug section (All Onion).", "Since any two sets of texts are bound to show some 
disparity between them, we compare the differences between domains to a control setting, where we randomly split each examined corpus into two halves, and compute the frequency distribution of each of them.", "The inner consistency of each corpus, defined as the similarity of distributions between the two halves, serves as a reference point for the similarity between domains.", "We refer to this measure as \"self-distance\".", "Following Plank and van Noord (2011) , we compute the Jensen-Shannon divergence and Variational distance (also known as L1 or Manhattan) as the comparison measures between the word frequency histograms.", "pora lies between 0.40 to 0.45 by the Jensen-Shannon divergence, but the distance between each pair is 0.60 to 0.65, with the three approximately forming an equilateral triangle in the space of word distributions.", "Similar results are obtained using Variational distance, and are omitted for brevity.", "These results suggest that rather than regarding all drug-related Onion texts as one domain, with legal and illegal texts as sub-domains, they should be treated as distinct domains.", "Therefore, using Onion data to characterize the differences between illegal and legal linguistic attributes is sensible.", "In fact, it is more sensible than comparing Illegal Onion to eBay text, as there the legal/illegal distinction may be confounded by the differences between eBay and Onion data.", "Differences in Named Entities In order to analyze the difference in the distribution of named entities between the domains, we used a Wikification technique (Bunescu and Paşca, 2006) , i.e., linking entities to their corresponding article in Wikipedia.", "Using spaCy's 4 named entity recognition, we first extract all named entity mentions from all the corpora.", "5 We then search for relevant Wikipedia entries for each named entity using the DBpedia Ontology API (Daiber et al., 2013) .", "For each domain we compute the total number of named entities and the percentage with corresponding Wikipedia articles.", "The results were obtained by averaging the percentage of wikifiable named entities in each site per domain.", "We also report the standard error for each average.", "According to our results (Table 2) , the Wikification success ratios of eBay and Illegal 4 https://spacy.io 5 We use all named entity types provided by spaCy (and not only \"Product\") to get a broader perspective on the differences between the domains in terms of their named entities.", "For example, the named entity \"Peru\" (of type \"Geopolitical Entity\") appears multiple times in Onion sites and is meant to imply the quality of a drug.", "Onion named entities is comparable and relatively low.", "However, sites selling legal drugs on Onion have a much higher Wikification percentage.", "Presumably the named entities in Onion sites selling legal drugs are more easily found in public databases such as Wikipedia because they are mainly well-known names for legal pharmaceuticals.", "However, in both Illegal Onion and eBay sites, the list of named entities includes many slang terms for illicit drugs and paraphernalia.", "These slang terms are usually not well known by the general public, and are therefore less likely to be covered by Wikipedia and similar public databases.", "In addition to the differences in Wikification ratios between the domains, it seems spaCy had trouble correctly identifying named entities in both Onion and eBay sites, possibly due to the common use of informal language and drugrelated jargon.", 
"Eyeballing the results, there were a fair number of false positives (words and phrases that were found by spaCy but were not actually named entities), especially in Illegal Onion sites.", "In particular, slang terms for drugs, as well as abbreviated drug terms, for example \"kush\" or \"GBL\", were being falsely picked up by spaCy.", "6 To summarize, results suggest both that (1) legal and illegal texts are different in terms of their named entities and their coverage in Wikipedia, as well as that (2) standard databases and standard NLP tools for named entity recognition (and potentially other text understanding tasks), require considerable adaptation to be fully functional on text related to illegal activity.", "Classification Experiments Here we detail our experiments in classifying text from different legal and illegal domains using various methods, to find the most important linguistic features distinguishing between the domains.", "Another goal of the classification task is to confirm our finding that the domains are distinguishable.", "Experimental setup.", "We split each subset among {eBay, Legal Onion, Illegal Onion} into training, validation and test.", "We select 456 training paragraphs, 57 validation paragraphs and 58 test paragraphs for each category (approximately a 80%/10%/10% split), randomly downsampling larger categories for an even division of labels.", "Model.", "To classify paragraphs into categories, we experiment with five classifiers: • NB (Naive Bayes) classifier with binary bagof-words features, i.e., indicator feature for each word.", "This simple classifier features frequently in work on text classification in the Darknet.", "• SVM (support vector machine) classifier with an RBF kernel, also with BoW features that count the number of words of each type.", "• BoE (bag-of-embeddings): each word is represented with its 100-dimensional GloVe vector (Pennington et al., 2014) .", "BoE sum (BoE average ) sums (averages) the embeddings for all words in the paragraph to a single vector, and applies a 100-dimensional fully-connected layer with ReLU nonlinearity and dropout p = 0.2.", "The word vectors are not updated during training.", "Vectors for words not found in GloVe are set randomly ∼ N (µ GloVe , σ 2 GloVe ).", "• seq2vec: same as BoE, but instead of averaging word vectors, we apply a single-layer 100-dimensional BiLSTM to the word vectors, and take the concatenated final hidden vectors from the forward and backward part as the input to a fully-connected layer (same hyper-parameters as above).", "• attention: we replace the word representations with contextualized pre-trained representations from ELMo .", "We then apply a self-attentive classification network (McCann et al., 2017) over the contextualized representations.", "This architecture has proved very effective for classification in recent work (Tutek and Šnajder, 2018; Shen et al., 2018) .", "For the NB classifier we use BernoulliNB from scikit-learn 7 with α = 1, and for the SVM classifier we use SVC, also from scikit-learn, with γ = scale and tolerance=10 −5 .", "We use the AllenNLP library 8 to implement the neural network classifiers.", "7 https://scikit-learn.org 8 https://allennlp.org Data manipulation.", "In order to isolate what factors contribute to the classifiers' performance, we experiment with four manipulations to the input data (in training, validation and testing).", "Specifically, we examine the impact of variations in the content words, function words and shallow syntactic structure 
(represented through POS tags).", "For this purpose, we consider content words as words whose universal part-of-speech according to spaCy is one of the following: Results when applying these manipulations are compared to the full condition, where all words are available.", "Settings.", "We experiment with two settings, classifying paragraphs from different domains: • Training and testing on eBay pages vs. Legal drug-related Onion pages, as a control experiment to identify whether Onion pages differ from clear net pages.", "• Training and testing on Legal Onion vs.", "Illegal Onion drugs-related pages, to identify the difference in language between legal and illegal activity on Onion drug-related websites.", "Results The accuracy scores for the different classifiers and settings are reported in Table 3 .", "confirmed by the drop in accuracy when content words are removed.", "However, in this setting (drop cont.", "), non-trivial performance is still obtained by the SVM classifier, suggesting that the domains are distinguishable (albeit to a lesser extent) based on the function word distribution alone.", "Surprisingly, the more sophisticated neural classifiers perform worse than Naive Bayes.", "This is despite using pre-trained word embeddings, and architectures that have proven beneficial for text classification.", "It is likely that this is due to the small size of the training data, as well as the specialized vocabulary found in this domain, which is unlikely to be supported well by the pre-trained embeddings (see §4.2).", "Legal vs. illegal drugs.", "Classifying legal and illegal pages within the drugs domain on Onion proved to be a more difficult task.", "However, where content words are replaced with their POS tags, the SVM classifier distinguishes between legal and illegal texts with quite a high accuracy (70.7% on a balanced test set).", "This suggests that the syntactic structure is sufficiently different between the domains, so as to make them distinguishable in terms of their distribution of grammatical categories.", "Illegality Detection Across Domains To investigate illegality detection across different domains, we perform classification experiments on the \"forums\" category that is also separated into legal and illegal sub-categories in DUTA-10K.", "The forums contain user-written text in various topics.", "Legal forums often discuss web design and other technical and non-technical activity on the internet, while illegal ones involve discussions about cyber-crimes and guides on how to commit them, as well as narcotics, racism and other criminal activities.", "As this domain contains usergenerated content, it is more varied and noisy.", "Experimental setup We use the cleaning process described in Section 3 and data splitting described in Section 5, with the same number of paragraphs.", "We experiment with two settings: • Training and testing on Onion legal vs. illegal forums, to evaluate whether the insights observed in the drugs domain generalize to user-generated content.", "• Training on Onion legal vs. illegal drugsrelated pages, and testing on Onion legal vs. illegal forums.", "This cross-domain evaluation reveals whether the distinctions learned on the drugs domain generalize directly to the forums domain.", "Results Accuracy scores are reported in Table 4 .", "Legal vs. 
illegal forums.", "Results when training and testing on forums data are much worse for the neural-based systems, probably due to the much noisier and more varied nature of the data.", "However, the SVM model achieves an accuracy of 85.3% in the full setting.", "Good performance is presented by this model even in the cases where the content words are dropped (drop.", "cont.)", "or replaced by part-of-speech tags (pos cont.", "), underscoring the distinguishability of legal in illegal content based on shallow syntactic structure in this domain as well.", "Cross-domain evaluation.", "Surprisingly, training on drugs data and evaluating on forums performs much better than in the in-domain setting for four out of five classifiers.", "This implies that while the forums data is noisy, it can be accurately classified into legal and illegal content when training on the cleaner drugs data.", "This also shows that illegal texts in Tor share common properties regardless of topical category.", "The much lower results obtained by the models where content words are dropped (drop cont.)", "or converted to POS tags (pos cont.", "), namely less than 70% as opposed to 89.7% when function words are dropped, suggest that some of these properties are lexical.", "Discussion As shown in Section 4, the Legal Onion and Illegal Onion domains are quite distant in terms of word distribution and named entity Wikification.", "Moreover, named entity recognition and Wikification work less well for the illegal domain, and so do state-of-the-art neural text classification architectures (Section 5), which present inferior re- sults to simple bag-of-words model.", "This is likely a result of the different vocabulary and syntax of text from Onion domain, compared to standard domains used for training NLP models and pretrained word embeddings.", "This conclusion has practical implications: to effectively process text in Onion, considerable domain adaptation should be performed, and effort should be made to annotate data and extend standard knowledge bases to cover this idiosyncratic domain.", "Another conclusion from the classification experiments is that the Onion Legal and Illegal Onion texts are harder to distinguish than eBay and Legal Onion, meaning that deciding on domain boundaries should consider syntactic structure, and not only lexical differences.", "Analysis of texts from the datasets.", "Looking at specific sentences (Figure 1 ) reveals that Legal Onion and Illegal Onion are easy to distinguish based on the identity of certain words, e.g., terms for legal and illegal drugs, respectively.", "Thus looking at the word forms is already a good solu-tion for tackling this classification problem, which gives further insight as to why modern text classification (e.g., neural networks) do not present an advantage in terms of accuracy.", "Analysis of manipulated texts.", "Given that replacing content words with their POS tags substantially lowers performance for classification of legal vs illegal drug-related texts (see \"pos cont.\"", "in Section 5), we conclude that the distribution of parts of speech alone is not as strong a signal as the word forms for distinguishing between the domains.", "However, the SVM model does manage to distinguish between the texts even in this setting.", "Indeed, Figure 2 demonstrates that there are easily identifiable patterns distinguishing between the domains, but that a bag-of-words approach may not be sufficiently expressive to identify them.", "Analysis of learned feature weights.", "As 
the Naive Bayes classifier was the most successful at distinguishing legal from illegal texts in the full setting (without input manipulation), we may conclude that the very occurrence of certain words provides a strong indication that an instance is taken from one class or the other.", "Table 5 shows the most indicative features learned by the Naive Bayes classifier for the Legal Onion vs.", "Illegal Onion classification in this setting.", "Interestingly, many strong features are function words, providing another indication of the different distribution of function words in the two domains.", "Conclusion In this paper we identified several distinguishing factors between legal and illegal texts, taking a variety of approaches, predictive (text classification), application-based (named entity Wikification), as well as an approach based on raw statistics.", "Our results revealed that legal and illegal texts on the Darknet are not only distinguishable in terms of their words, but also in terms of their shallow syntactic structure, manifested in their POS tag and function word distributions.", "Distinguishing features between legal and illegal texts are consistent enough between domains, so that a classifier trained on drug-related websites can be straightforwardly ported to classify legal and illegal texts from another Darknet domain (forums).", "Our results also show that in terms of vocabulary, legal texts and illegal texts are as distant from each other, as from comparable texts from eBay.", "We conclude from this investigation that Onion pages provide an attractive testbed for studying distinguishing factors between the text of legal and illegal webpages, as they present challenges to offthe-shelf NLP tools, but at the same time have sufficient self-consistency to allow studies of the linguistic signals that separate these classes." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "5", "5.1", "6", "6.1", "6.2", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets Used", "Domain Differences", "Vocabulary Differences", "Differences in Named Entities", "Classification Experiments", "Results", "Illegality Detection Across Domains", "Experimental setup", "Results", "Discussion", "Conclusion" ] }
GEM-SciDuet-train-54#paper-1096#slide-17
Conclusion
Language of legal/illegal Darknet is different: As different as Darknet vs. eBay. Can be distinguished just by POS. Observed through multiple lenses:
Language of legal/illegal Darknet is different: As different as Darknet vs. eBay. Can be distinguished just by POS. Observed through multiple lenses:
[]
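The record above also describes the input manipulations used in the classification experiments: dropping content words ("drop cont.") or replacing them with their part-of-speech tags ("pos cont."), with spaCy providing the universal POS tags. Below is a rough sketch of these manipulations; the exact set of content-word POS tags is truncated in the quoted text, so the set used here is an assumption, and the model name is the standard small English spaCy model rather than anything stated in the source.

```python
# Rough sketch of the "drop content" / "POS content" manipulations (assumptions noted above).
import spacy

# Requires a downloaded model, e.g.: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Assumed set of content-word universal POS tags (the list is cut off in the quoted text).
CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV", "NUM"}

def manipulate(text: str, mode: str) -> str:
    out = []
    for tok in nlp(text):
        if tok.pos_ in CONTENT_POS:
            if mode == "drop_content":
                continue                  # drop the content word entirely
            if mode == "pos_content":
                out.append(tok.pos_)      # replace the content word by its POS tag
                continue
        out.append(tok.text)              # function words (and punctuation) are kept
    return " ".join(out)

print(manipulate("We ship cheap pharmaceuticals to international customers", "pos_content"))
```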
GEM-SciDuet-train-55#paper-1103#slide-0
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammarbased language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
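One of the sufficient conditions for closedness listed above is an acyclic RTG. Below is a minimal, hypothetical Python sketch (not part of the paper) of how such a check could look: it searches for a cycle in the relation "the left-hand side of a rule depends on a nonterminal of its right-hand side". The encoding of rules as (lhs, rhs_nonterminals) pairs is an assumption made only for this illustration.

```python
# Hypothetical sketch: check whether an RTG is acyclic, i.e. no nonterminal
# can reach itself via rules A -> sigma(..., B, ...).  Acyclicity is one of
# the sufficient conditions for closedness mentioned in the text.

def is_acyclic_rtg(nonterminals, rules):
    # rules: iterable of (lhs, rhs_nonterminals); terminal symbols are omitted.
    successors = {A: set() for A in nonterminals}
    for lhs, rhs in rules:
        successors[lhs].update(rhs)

    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / finished
    colour = {A: WHITE for A in nonterminals}

    def dfs(A):
        colour[A] = GREY
        for B in successors[A]:
            if colour[B] == GREY:         # back edge: a cycle exists
                return False
            if colour[B] == WHITE and not dfs(B):
                return False
        colour[A] = BLACK
        return True

    return all(dfs(A) for A in nonterminals if colour[A] == WHITE)

if __name__ == "__main__":
    print(is_acyclic_rtg({"A", "B"}, [("A", ("B",)), ("B", ("A",))]))  # False
    print(is_acyclic_rtg({"A", "B"}, [("A", ("B",)), ("B", ())]))      # True
```

For cyclic grammars one has to fall back on the conditions on the weight algebra instead, e.g. the requirement above that no chain rule or ε-rule has probability 1.0 when BD is used.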
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
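For a concrete picture of the first phase, here is a small, hypothetical Python sketch of the CYK-like item construction behind cwds, specialized to the string case where (as noted in the content above) the new nonterminals can be read as spans of the input. Items are simplified to triples (A, i, j) meaning "A derives w[i:j]", rule right-hand sides are encoded as patterns whose entries are literal words (str) or variable positions (int), and the toy grammar is a flattened fragment of the running example rather than the grammar of Fig. 1. This only illustrates the deduction-system idea, not the paper's full weighted construction.

```python
# Hypothetical CYK-style sketch of the item construction behind cwds for the
# string case: an item (A, i, j) states that nonterminal A derives w[i:j].
# A rule is (lhs, pattern, rhs_nts); pattern entries are words or variable
# indices 1..m referring to positions in rhs_nts.

def items(rules, w):
    chart = set()

    def matches(pattern, rhs_nts, i, j):
        if not pattern:
            return i == j
        head, rest = pattern[0], pattern[1:]
        if isinstance(head, str):                       # literal word
            return i < j and w[i] == head and matches(rest, rhs_nts, i + 1, j)
        nt = rhs_nts[head - 1]                          # variable x_head
        return any((nt, i, m) in chart and matches(rest, rhs_nts, m, j)
                   for m in range(i, j + 1))

    changed = True
    while changed:                                      # fixpoint over items
        changed = False
        for lhs, pattern, rhs_nts in rules:
            for i in range(len(w) + 1):
                for j in range(i, len(w) + 1):
                    if (lhs, i, j) not in chart and matches(pattern, rhs_nts, i, j):
                        chart.add((lhs, i, j))
                        changed = True
    return chart

if __name__ == "__main__":
    rules = [
        ("S", (1, 2), ("NP", "VP")),
        ("NP", ("fruit", "flies"), ()),
        ("VP", ("like", 1), ("NP",)),
        ("NP", ("bananas",), ()),
    ]
    w = "fruit flies like bananas".split()
    print(("S", 0, len(w)) in items(rules, w))          # True
```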
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-0
The weighted parsing problem
e.g., English sentences ( ) e.g., Fruit flies like bananas Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) e.g., abstract syntax trees (ASTs) value in a weight algebra parse syntactic object parse Fruit flies max Semiring parsing (Goodman 1999) recognition string probability probability of best derivation derivation forest best derivation(s) best derivation(s) Parsing with superior grammars (Knuth 1977; Nederhof 2003) Algebraic dynamic programming (Giegerich, Meyer, and Steffen 2004) minimum edit distance matrix chain multiplication Reduct of a grammar and a syntactic object (cf. Bar-Hillel, Perles, and
e.g., English sentences ( ) e.g., Fruit flies like bananas Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) e.g., abstract syntax trees (ASTs) value in a weight algebra parse syntactic object parse Fruit flies max Semiring parsing (Goodman 1999) recognition string probability probability of best derivation derivation forest best derivation(s) best derivation(s) Parsing with superior grammars (Knuth 1977; Nederhof 2003) Algebraic dynamic programming (Giegerich, Meyer, and Steffen 2004) minimum edit distance matrix chain multiplication Reduct of a grammar and a syntactic object (cf. Bar-Hillel, Perles, and
[]
GEM-SciDuet-train-55#paper-1103#slide-1
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
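The best derivation M-monoid BD of Ex. 3, used throughout the content above, can be sketched as follows. The encoding is hypothetical: a value is a pair of a probability and a set of derivations, max_BD keeps the more probable value (uniting the sets on ties), and tc_{p,r} is the top concatenation that multiplies probabilities and extends each combination of subderivations by rule r.

```python
# Hypothetical sketch of the best derivation M-monoid BD of Ex. 3.
# A value is (probability, frozenset of derivations); a derivation is a
# nested tuple (rule name, subderivations).
import itertools

ZERO_BD = (0.0, frozenset())

def max_bd(u, v):
    (p1, d1), (p2, d2) = u, v
    if p1 != p2:
        return u if p1 > p2 else v
    return (p1, d1 | d2)                  # tie: keep both derivation sets

def tc(p, rule_name):
    """Top concatenation tc_{p,r}: multiply probabilities, extend derivations."""
    def op(*values):
        prob = p
        for q, _ in values:
            prob *= q
        derivs = frozenset((rule_name,) + combo
                           for combo in itertools.product(*(d for _, d in values)))
        return (prob, derivs)
    return op

if __name__ == "__main__":
    leaf = tc(0.6, "r10")()               # (0.6, {('r10',)})
    node = tc(0.3, "r4")(leaf)            # probability 0.18 (up to rounding)
    print(max_bd(node, ZERO_BD))          # keeps the only derivation seen so far
```

With wt(r_i) = tc_{p_i, r_i}, evaluating an AST bottom-up with these operations yields the (probability, derivation set) pairs illustrated in Fig. 2.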
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-1
Regular tree grammars RTG
T abstract syntax tree AST() Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019)
T abstract syntax tree AST() Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019)
[]
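To complement the RTG slide above with something executable, here is a small, hypothetical Python sketch that enumerates abstract syntax trees of an RTG up to a depth bound, projects them to terminal trees with π_Σ, and evaluates the terminal trees in a CFG_Δ-style string algebra. The pattern encoding (tuples of literal words and variable indices) is an assumption of this illustration, and the toy rules are a simplified fragment of the paper's Ex. 1, not the grammar of Fig. 1.

```python
# Hypothetical sketch: RTG rules, bounded AST enumeration, projection pi_Sigma,
# and evaluation in a CFG_Delta-style algebra (patterns are tuples of literal
# words and variable indices).
import itertools

def asts(rules, nonterminal, max_depth):
    """Yield ASTs (rule, subtrees) rooted in `nonterminal`, depth <= max_depth."""
    if max_depth == 0:
        return
    for rule in rules:
        lhs, pattern, rhs = rule
        if lhs != nonterminal:
            continue
        child_lists = [list(asts(rules, B, max_depth - 1)) for B in rhs]
        for children in itertools.product(*child_lists):
            yield (rule, children)

def pi_sigma(ast):
    (lhs, pattern, rhs), children = ast
    return (pattern, tuple(pi_sigma(c) for c in children))

def evaluate(term):
    """Unique homomorphism into the string algebra: splice arguments into slots."""
    pattern, children = term
    args = [evaluate(c) for c in children]
    return " ".join(args[x - 1] if isinstance(x, int) else x for x in pattern)

if __name__ == "__main__":
    rules = [
        ("S", (1, 2), ("NP", "VP")),
        ("NP", ("fruit", "flies"), ()),
        ("VP", ("like", "bananas"), ()),
    ]
    for d in asts(rules, "S", 3):
        print(evaluate(pi_sigma(d)))   # fruit flies like bananas
```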
GEM-SciDuet-train-55#paper-1103#slide-2
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
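As stated above, every semiring (K, ⊕, ⊗, 0, 1) can be viewed as an M-monoid whose operation set Ω_⊗ consists of the functions mul_k^(m)(k_1, …, k_m) = k ⊗ k_1 ⊗ ⋯ ⊗ k_m. A hedged sketch of this construction (class and function names are mine), instantiated with the Viterbi semiring ([0, 1], max, ·, 0, 1):

```python
# A semiring (K, plus, times, zero, one) viewed as an M-monoid: its operation
# set Omega consists of the functions mul_k^(m)(k1, ..., km) = k * k1 * ... * km.
# Illustrative sketch; the paper defines this abstractly, not as code.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Semiring:
    plus: Callable
    times: Callable
    zero: object
    one: object

    def mul(self, k):
        """The M-monoid operation mul_k (of arbitrary rank)."""
        def op(*args):
            result = k
            for a in args:
                result = self.times(result, a)
            return result
        return op


# Viterbi semiring ([0,1], max, *, 0, 1): weights are probabilities,
# addition is max (best derivation weight), multiplication is product.
viterbi = Semiring(plus=max, times=lambda x, y: x * y, zero=0.0, one=1.0)

# A rule with probability 0.5 and two subderivations of weight 0.4 and 0.9:
op = viterbi.mul(0.5)
print(op(0.4, 0.9))                 # 0.18
# Ambiguity is resolved with plus (= max here):
print(viterbi.plus(0.18, 0.27))     # 0.27
# The unit 1 = mul_1^(0)():
print(viterbi.mul(1.0)())           # 1.0
```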
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
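In Example 1, the language algebra CFG_Δ interprets a symbol ⟨u_0 x_1 u_1 ⋯ x_m u_m⟩ as the operation that interleaves its m argument strings with the fixed parts u_0, …, u_m. A small sketch (the encoding of syntactic objects as tuples of words and of the symbols as tuples of fixed parts is my own) that evaluates the term π_Σ(d) from Fig. 2 to the sentence fruit flies like bananas:

```python
# The language algebra CFG_Delta from Example 1: syntactic objects are
# sequences over Delta, and a symbol <u0 x1 u1 ... xm um> is interpreted as
# the operation (a1, ..., am) |-> u0 a1 u1 ... am um.
# Encoding the symbols as tuples of fixed parts is my own choice.

from typing import Tuple

Words = Tuple[str, ...]


def interpret(fixed: Tuple[Words, ...]):
    """fixed = (u0, u1, ..., um); returns the m-ary operation on Delta*."""
    def op(*args: Words) -> Words:
        result = fixed[0]
        for a, u in zip(args, fixed[1:]):
            result = result + a + u
        return result
    return op


# Symbols of Sigma used in Fig. 2 (epsilon is the empty tuple ()).
pair   = interpret(((), (), ()))        # <x1 x2>
single = interpret(((), ()))            # <x1>
fruit, flies = interpret((("fruit",),)), interpret((("flies",),))
like, bananas = interpret((("like",),)), interpret((("bananas",),))

# pi_Sigma(d) for the AST d of Fig. 2, evaluated directly in the algebra:
sentence = pair(pair(fruit(), flies()), pair(like(), single(bananas())))
print(" ".join(sentence))               # fruit flies like bananas
```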
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
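Example 4 above claims that algebraic dynamic programming problems such as minimum edit distance are weighted parsing problems. The following is a hedged sketch of the underlying dynamic program in its textbook form (it uses the usual suffix-pair subproblems and unit costs rather than the paper's exact rule set); the objective function h = min is applied after every step, which is exactly Bellman's principle h(ω(F)) = h(ω(h(F))) mentioned above.

```python
# Minimum edit distance as a dynamic-programming problem, illustrating the
# ADP view: subproblems are pairs of suffixes, the objective function h keeps
# only the minimum, and Bellman's principle lets us apply h at every step.
# This is the textbook recursion, not a transcription of the paper's rules.

from functools import lru_cache


def med(u: str, v: str) -> int:
    @lru_cache(maxsize=None)
    def go(i: int, j: int) -> int:
        if i == len(u):
            return len(v) - j            # only insertions remain
        if j == len(v):
            return len(u) - i            # only deletions remain
        candidates = [
            go(i + 1, j) + 1,                              # delete u[i]
            go(i, j + 1) + 1,                              # insert v[j]
            go(i + 1, j + 1) + (0 if u[i] == v[j] else 1)  # replace / match
        ]
        return min(candidates)           # h = min, applied at every step
    return go(0, 0)


print(med("flies", "like"))   # 3
print(med("fruit", "fruit"))  # 0
```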
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
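The value computation algorithm (Alg. 1), whose termination and correctness Theorem 7 establishes for closed wRTG-LMs, is a Kleene-style fixpoint iteration: every nonterminal starts at 0 and is recomputed from the rules until no value changes. A hedged Python sketch (the representation of rules as triples and all names are mine), instantiated with a Viterbi-style weight algebra (max, ·):

```python
# A sketch of the value computation algorithm (Alg. 1): repeatedly recompute
# V(A) as the sum over all rules A -> <x1...xm>(A1, ..., Am) of
# wt(r)(V(A1), ..., V(Am)), until no value changes.
# Rule and weight representations are my own choice.

def value_computation(nonterminals, rules, plus, zero, A0):
    """rules: list of (lhs, rhs_nonterminals, operation); the operation takes
    len(rhs_nonterminals) weights and returns a weight."""
    V = {A: zero for A in nonterminals}
    changed = True
    while changed:
        changed = False
        for A in nonterminals:
            new = zero
            for lhs, rhs, op in rules:
                if lhs == A:
                    new = plus(new, op(*[V[B] for B in rhs]))
            if new != V[A]:
                V[A] = new
                changed = True
    return V[A0]


# Example: best-derivation probabilities in the Viterbi algebra (max, *).
# S -> f(A, B) with weight 0.5, A -> a with 0.8, B -> b with 0.9, B -> c with 0.2.
rules = [
    ("S", ("A", "B"), lambda x, y: 0.5 * x * y),
    ("A", (), lambda: 0.8),
    ("B", (), lambda: 0.9),
    ("B", (), lambda: 0.2),
]
print(value_computation(["S", "A", "B"], rules, max, 0.0, "S"))  # 0.36
```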
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, on average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
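The comparison above refers to Knuth's generalization of Dijkstra's algorithm, which settles nonterminals in order of their best weights, and to the idea of imposing such an ordering on the iteration in line 5 of Alg. 1. A hedged sketch of that ordering idea for the superior min/plus case (a priority-queue computation over rules viewed as hyperedges; this is an illustration, not the paper's algorithm):

```python
# Knuth-style generalization of Dijkstra's algorithm to hypergraphs (rules):
# process nonterminals in order of increasing best cost; with a superior
# weight algebra (here: min and nonnegative additive costs) every value is
# final when it is popped from the queue.  Illustrative sketch only.

import heapq


def knuth_best(nonterminals, rules, A0):
    """rules: list of (lhs, rhs, cost); using a rule costs
    cost + sum of the best costs of its rhs nonterminals (a superior function)."""
    best = {}
    # For each rule, count how many rhs occurrences are still unsettled.
    pending = [len(rhs) for (_, rhs, _) in rules]
    agenda = [(c, lhs) for (lhs, rhs, c) in rules if not rhs]
    heapq.heapify(agenda)
    while agenda:
        cost, A = heapq.heappop(agenda)
        if A in best:
            continue                      # already settled with a smaller cost
        best[A] = cost
        for i, (lhs, rhs, c) in enumerate(rules):
            if A in rhs and lhs not in best:
                pending[i] -= rhs.count(A)
                if pending[i] == 0:       # all rhs nonterminals are settled
                    heapq.heappush(agenda, (c + sum(best[B] for B in rhs), lhs))
    return best.get(A0)


rules = [("S", ("A", "B"), 1), ("A", (), 2), ("B", ("A",), 3), ("B", (), 10)]
print(knuth_best(["S", "A", "B"], rules, "S"))   # 1 + 2 + 5 = 8
```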
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-2
Language algebras
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) interpretation of as operations on the set of syntactic objects T (terms) (syntactic objects) factors(Fruit flies like bananas) = {Fruit, like bananas, } Fruit flies S (NP,VP)
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) interpretation of as operations on the set of syntactic objects T (terms) (syntactic objects) factors(Fruit flies like bananas) = {Fruit, like bananas, } Fruit flies S (NP,VP)
[]
GEM-SciDuet-train-55#paper-1103#slide-3
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
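Knuth's setting, recalled in the introduction above, requires the weight functions to be superior: monotone nondecreasing in every argument and at least as large as each argument. A small illustrative check of these two conditions on a finite sample (function and variable names are mine) shows why additive nonnegative costs qualify while products of probabilities do not, which is why the best derivation problem does not fit Knuth's framework directly.

```python
# A numeric sanity check of the "superior function" conditions used by Knuth:
# f is monotone nondecreasing in each argument and f(k1,...,km) >= max(ki).
# Purely illustrative; in general the property is proved, not tested.

import itertools


def is_superior_on(f, sample):
    """Check the two conditions on a finite sample of argument tuples."""
    ok = all(f(*ks) >= max(ks) for ks in sample)
    for ks in sample:
        for i in range(len(ks)):
            bigger = list(ks)
            bigger[i] += 1.0                     # increase one argument
            ok = ok and f(*bigger) >= f(*ks)     # monotonicity
    return ok


sample = list(itertools.product([0.0, 1.0, 2.5, 7.0], repeat=2))

cost = lambda k1, k2: 3.0 + k1 + k2              # rule cost 3 plus subcosts
print(is_superior_on(cost, sample))              # True

product = lambda k1, k2: k1 * k2                 # probabilities are NOT superior
print(is_superior_on(product, sample))           # False (e.g. 0.0 * 7.0 < 7.0)
```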
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
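The best derivation M-monoid BD defined above (Ex. 3) pairs a probability with a set of derivation trees; its sum max_BD keeps the more probable pair (uniting the tree sets on ties), and tc_{p,r} multiplies probabilities and puts the rule r on top. A hedged sketch (derivation trees are encoded as nested tuples, which is my own choice) reproducing the weight 0.0216 of the first AST from Fig. 2 and comparing it with the second one (0.0144):

```python
# A sketch of the best derivation M-monoid BD: elements are pairs
# (probability, set of derivation trees); max_bd keeps the more probable pair
# (union on ties), and tc(p, r) multiplies probabilities and puts rule r on top.
# The nested-tuple encoding of derivation trees is my own.

def max_bd(x, y):
    (p1, d1), (p2, d2) = x, y
    if p1 > p2:
        return x
    if p2 > p1:
        return y
    return (p1, d1 | d2)                 # same probability: keep both tree sets


def tc(p, r):
    """Top concatenation tc_{p,r}: combine subderivations under rule r."""
    def op(*args):
        prob = p
        for q, _ in args:
            prob *= q
        trees = {(r,) + tuple(ts) for ts in _product(*(d for _, d in args))}
        return (prob, trees)
    return op


def _product(*sets):
    result = [()]
    for s in sets:
        result = [t + (x,) for t in result for x in s]
    return result


# The first AST of Fig. 2, built bottom-up with the rule probabilities of Fig. 1:
d1 = tc(1.0, "r1")(tc(0.5, "r3")(tc(1.0, "r8")(), tc(0.4, "r9")()),
                   tc(0.6, "r6")(tc(1.0, "r12")(), tc(0.3, "r4")(tc(0.6, "r10")())))
d2_prob = 0.0144                          # weight of the second AST (from the paper)
print(d1[0])                              # 0.0216 (up to floating-point rounding)
print(max_bd(d1, (d2_prob, {("r1", "...")}))[0])   # 0.0216: the first AST wins
```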
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
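To get an intuition for why closedness makes such an infinite sum computable, consider a hypothetical grammar with one cyclic chain rule of weight 0.9 and one terminal rule of weight 0.5 under the best-derivation view (⊕ = max, weights multiplied along the AST): every additional pass through the cycle can only decrease the weight, so the supremum over the infinitely many ASTs is already reached within a finite subset, as in Theorem 6. A tiny enumeration sketch (ours, purely illustrative):

```python
# An AST with k passes through the cycle has weight 0.9**k * 0.5 under (max, *).
def best_weight_up_to(c):
    """max over all ASTs that use the cyclic rule at most c times"""
    return max(0.9 ** k * 0.5 for k in range(c + 1))

print([best_weight_up_to(c) for c in range(5)])
# [0.5, 0.5, 0.5, 0.5, 0.5] -- the value over AST(G')(0) already equals the supremum
# over all ASTs; this finite-subset property is what c-closedness guarantees.
```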
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg. 1) has the same complexity as the other algorithms, on average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg. 1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
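The ordering idea mentioned above can be made concrete for superior weight algebras: with a Dijkstra/Knuth-style agenda, each nonterminal is finalized once, in order of its best weight. The sketch below is our own, uses the same (lhs, op, rhs) rule format as the value_computation sketch above, and assumes a min-based algebra whose operations are superior, e.g. cost addition.

```python
import heapq

def knuth_best(nonterminals, rules, start):
    """Knuth-style computation for a superior, min-based weight algebra.
    Every op must satisfy op(k1, ..., km) >= max(k1, ..., km)."""
    best = {}                                                    # finalized nonterminals
    agenda = [(op(), lhs) for lhs, op, rhs in rules if not rhs]  # rank-0 rules seed the agenda
    heapq.heapify(agenda)
    while agenda and len(best) < len(nonterminals):
        value, A = heapq.heappop(agenda)
        if A in best:
            continue                                             # a smaller value won earlier
        best[A] = value
        for lhs, op, rhs in rules:                               # relax rules that became usable
            if A in rhs and all(B in best for B in rhs):
                heapq.heappush(agenda, (op(*[best[B] for B in rhs]), lhs))
    return best.get(start)

# toy run with cost addition (superior over min):
rules = [("A0", lambda x, y: 1 + x + y, ["A", "B"]), ("A", lambda: 2, []),
         ("B", lambda x: 1 + x, ["A"]), ("B", lambda: 10, [])]
print(knuth_best(["A0", "A", "B"], rules, "A0"))   # 6
```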
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-3
Semirings
Algebraic structure (K, ⊕, ⊗, 0, 1): ⊗ is used to evaluate an AST to a weight, ⊕ accumulates the weights of several ASTs. Examples: the Boolean semiring ({false, true}, ∨, ∧, false, true) and the semiring of natural numbers (N, +, ·, 0, 1).
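The two example semirings on the slide can be written out directly; the following Python sketch is only illustrative (class and field names are ours):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    plus: Callable[[Any, Any], Any]    # accumulates the weights of several ASTs
    times: Callable[[Any, Any], Any]   # evaluates a single AST to a weight
    zero: Any
    one: Any

BOOLEAN  = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)  # recognition
NATURALS = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0, 1)            # counting derivations

print(BOOLEAN.plus(True, False), NATURALS.times(2, 3))   # True 6
```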
[]
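As the paper notes, every semiring (K, ⊕, ⊗, 0, 1) can be read as an M-monoid whose operations are the multiplications mul_k^(m)(k_1, ..., k_m) = k ⊗ k_1 ⊗ ... ⊗ k_m. A sketch of that wrapper, building on the Semiring class above (the Viterbi instance is our own example):

```python
from functools import reduce

def mul_op(semiring, k, m):
    """The m-ary operation mul_k^(m) of the M-monoid induced by a semiring."""
    def op(*args):
        assert len(args) == m
        return reduce(semiring.times, args, k)   # k (x) k1 (x) ... (x) km
    return op

# a rule with semiring weight 0.5 and rank 2 becomes the operation mul_0.5^(2):
VITERBI = Semiring(max, lambda a, b: a * b, 0.0, 1.0)
print(mul_op(VITERBI, 0.5, 2)(0.4, 0.9))   # 0.18
```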
GEM-SciDuet-train-55#paper-1103#slide-4
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
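For the string algebra CFG_Δ the factors of a are exactly its spans, so the construction of cwds (given in full above) specializes to the classical Bar-Hillel/CYK product. The following simplified sketch is ours and only handles rules whose patterns are a single terminal, x1, or x1 x2; it emits rules in the (item, op, child_items) format expected by the value_computation sketch above, with probabilities multiplied along an AST.

```python
def cwds_strings(rules, w):
    """Simplified phase 1 over strings: items (A, i, j) stand for spans of w.
    Input rules: (lhs, prob, terminal_or_None, rhs) with patterns "a", "x1" or "x1 x2"."""
    n, out = len(w), []
    spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
    for lhs, p, term, rhs in rules:
        if len(rhs) == 0:                      # A -> "a": one item per occurrence of a in w
            out += [((lhs, i, i + 1), lambda p=p: p, [])
                    for i in range(n) if w[i] == term]
        elif len(rhs) == 1:                    # A -> x1 (B): same span for parent and child
            out += [((lhs, i, j), lambda x, p=p: p * x, [(rhs[0], i, j)])
                    for i, j in spans]
        else:                                  # A -> x1 x2 (B, C): combine adjacent spans
            out += [((lhs, i, j), lambda x, y, p=p: p * x * y,
                     [(rhs[0], i, k), (rhs[1], k, j)])
                    for i, j in spans for k in range(i + 1, j)]
    return out

# With the grammar of Ex. 1 encoded in this format, w = "fruit flies like bananas".split(),
# items = all pairs (A, i, j), and start item ("S", 0, len(w)),
#   value_computation(items, cwds_strings(rules, w), ("S", 0, len(w)), max, 0.0)
# should return the probability of the best derivation (0.0216 in Fig. 2).
```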
" ] }
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-4
Multioperator monoids (M-monoids)
M-monoid (K, ⊕, 0, Ω): ⊕ binary, Ω a set of m-ary operations (here: distributive). Example: the minimum edit distance M-monoid ({h(F) | F ⊆ N}, min, ∅, Ω_med).
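The operations of the minimum edit distance M-monoid act on sets of candidate costs, and the objective function h = min keeps only the best one. A small Python sketch (function names are ours; the cost of replacement follows Fig. 3 of the paper, where rep_{a,b} leaves the count unchanged):

```python
def h(F):                        # single-valued objective function: keep only the minimum
    return {min(F)} if F else set()

def oplus(F1, F2):               # the sum of the med M-monoid: minimum over the union
    return h(F1 | F2)

def delete(F):  return h({n + 1 for n in F})   # del_a
def insert(F):  return h({n + 1 for n in F})   # ins_a
def replace(F): return h({n for n in F})       # rep_{a,b}, cost 0 as in Fig. 3
def nil():      return {0}                     # the empty alignment

print(oplus(delete(nil()), insert(insert(nil()))))   # {1}: one deletion beats two insertions
```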
[]
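The best-derivation M-monoid BD from Ex. 3 of the paper is another instructive instance: its elements pair a probability with a set of abstract syntax trees, and the rule operations are top concatenations tc_{p,r}. The sketch below is only an illustration (trees are encoded as nested tuples):

```python
def max_bd(x, y):
    """max_BD on pairs (probability, set of ASTs)."""
    (p1, D1), (p2, D2) = x, y
    if p1 > p2: return (p1, D1)
    if p2 > p1: return (p2, D2)
    return (p1, D1 | D2)                      # equal probability: keep both tree sets

def tc(p, r):
    """Top concatenation tc_{p,r}: multiply the probabilities, put rule r on top of the trees."""
    def op(*children):
        prob = p
        combos = [()]
        for _, D in children:                 # all combinations of child trees
            combos = [c + (t,) for c in combos for t in D]
        for child_p, _ in children:
            prob *= child_p
        return (prob, {(r,) + c for c in combos})
    return op

# two competing derivations of the same string:
d1 = tc(0.6, "r1")(tc(1.0, "r2")())
d2 = tc(0.4, "r3")()
print(max_bd(d1, d2))   # (0.6, {('r1', ('r2',))})
```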
GEM-SciDuet-train-55#paper-1103#slide-5
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
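Looking back at the semiring-to-M-monoid construction above before the generated language is defined: the family Ω_⊗ = {mul_k^(m)} can be generated mechanically. The sketch below is an assumption-laden illustration in Python and picks the Viterbi-style semiring ([0,1], max, ·, 0, 1) purely as an example instance.

```python
from functools import reduce

# For a semiring (K, plus, times, zero, one), the M-monoid operations are
# mul_k^(m)(k1, ..., km) = k * k1 * ... * km, one for every k in K and arity m.
def mul(times, k):
    def op(*args):
        return reduce(times, args, k)
    return op

# Example instance: the Viterbi semiring ([0,1], max, *, 0, 1).
plus, times, zero, one = max, (lambda a, b: a * b), 0.0, 1.0

assert mul(times, 0.5)(0.4, 0.6) == 0.5 * 0.4 * 0.6   # a binary mul_0.5
assert mul(times, one)() == 1.0                        # 1 = mul_1^(0)()
```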
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
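Returning briefly to Example 1 before the best-derivation M-monoid is spelled out: the language algebra CFG_Δ and the evaluation of π_Σ(d) can be sketched as follows. The angle-bracket symbol names, the dictionary phi, and the evaluate helper are illustrative choices, not notation from the paper.

```python
# Interpretations of a few symbols of CFG_Delta: <x1 x2> concatenates its two
# arguments, <x1> is the identity, and each <w> (rank 0) yields the word w.
phi = {
    "<x1 x2>": lambda a, b: a + " " + b,
    "<x1>":    lambda a: a,
    "<fruit>": lambda: "fruit", "<flies>": lambda: "flies",
    "<like>":  lambda: "like",  "<bananas>": lambda: "bananas",
}

def evaluate(term, phi):
    label, subterms = term
    return phi[label](*(evaluate(s, phi) for s in subterms))

# pi_Sigma(d) for the "(fruit flies) like (bananas)" reading of the example:
t = ("<x1 x2>",
     [("<x1 x2>", [("<fruit>", []), ("<flies>", [])]),
      ("<x1 x2>", [("<like>", []), ("<x1>", [("<bananas>", [])])])])

assert evaluate(t, phi) == "fruit flies like bananas"
```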
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
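Before the construction of cwds is spelled out, two short sketches for the examples above. First, the best-derivation M-monoid BD of Example 3: values are pairs of a probability and a set of derivation trees; the function names max_bd and tc are assumptions of the sketch, and the final assertion reuses the probabilities 0.0216 and 0.0144 computed in the example, with the two ASTs abbreviated as opaque strings.

```python
from itertools import product

# max_BD keeps the more probable value and unites the tree sets on a tie.
def max_bd(v1, v2):
    (p1, d1), (p2, d2) = v1, v2
    if p1 > p2:
        return v1
    if p2 > p1:
        return v2
    return (p1, d1 | d2)

# tc_{p,r} multiplies the probabilities and puts rule r on top of the argument trees.
def tc(p, r):
    def op(*args):
        prob = p
        for q, _ in args:
            prob *= q
        trees = frozenset((r, ts) for ts in product(*(d for _, d in args)))
        return (prob, trees)
    return op

assert tc(0.6, "r10")() == (0.6, frozenset({("r10", ())}))     # a rank-0 rule
best = max_bd((0.0216, frozenset({"d"})), (0.0144, frozenset({"d'"})))
assert best == (0.0216, frozenset({"d"}))
```

Second, the minimum edit distance of Example 4, computed by the ordinary dynamic-programming recurrence that the wRTG-LM construction is meant to reproduce; the cost convention (0 for a match, 1 for a deletion, insertion, or proper replacement) is the standard one and is assumed here rather than read off the figure.

```python
# Classical DP for the minimum edit distance (deletion, insertion, replacement).
def med(u: str, v: str) -> int:
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete the remaining suffix of u
    for j in range(n + 1):
        d[0][j] = j                     # insert the remaining suffix of v
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (u[i - 1] != v[j - 1]))  # replacement / match
    return d[m][n]

assert med("flies", "fries") == 1
assert med("fruit", "fruit") == 0
```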
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
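A minimal executable rendering of the value computation algorithm (Alg. 1). It assumes the grammar delivered by cwds is given as triples (left-hand side, weight operation, tuple of right-hand-side nonterminals) and that the weight algebra supplies ⊕ and 0; the toy grammar and the Viterbi-style weights at the bottom are invented to exercise the sketch.

```python
# Fixpoint iteration of lines 3-12 of Alg. 1:
#   V(A) = (+) over rules r = A -> x1..xm(A1,...,Am) of wt'(r)(V(A1),...,V(Am)),
# repeated until no nonterminal changes its value; the result is V(A0).
def value_computation(nonterminals, rules, start, plus, zero):
    V = {A: zero for A in nonterminals}
    changed = True
    while changed:
        changed = False
        for A in nonterminals:
            new = zero
            for lhs, op, rhs in rules:
                if lhs == A:
                    new = plus(new, op(*(V[B] for B in rhs)))
            if new != V[A]:
                V[A] = new
                changed = True
    return V[start]

# Toy input with the Viterbi-style weight algebra ([0,1], max, *):
rules = [
    ("S",  lambda x, y: 1.0 * x * y, ("NP", "VP")),   # weight operation of an S-rule
    ("NP", lambda: 0.5,              ()),             # rank-0 rule of weight 0.5
    ("VP", lambda: 0.6,              ()),             # rank-0 rule of weight 0.6
]
assert abs(value_computation(["S", "NP", "VP"], rules, "S", max, 0.0) - 0.3) < 1e-9
```

The in-place update of V[A] inside the sweep matches lines 9-11 of Alg. 1, so later nonterminals in the same sweep already see the new value.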
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
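Before continuing with non-looping wRTG-LMs, a toy illustration (not taken from the paper, numbers invented) of why closedness makes the accumulation effectively finite: take the Viterbi-style algebra ([0,1], max, ·) and a single cyclic chain rule of probability 0.9. Passing through the cycle c times multiplies a derivation's weight by 0.9^c, so

```latex
\max_{c \in \mathbb{N}} \; 0.9^{\,c} \cdot w \;=\; 0.9^{\,0} \cdot w \;=\; w ,
```

that is, the (possibly infinite) maximum over all ASTs coincides with the maximum over the cycle-free ones, the finite set AST(G')^(0) of Theorem 6. With probability 1.0 the cycle would tie instead of lose, which is presumably why the closedness statement for BD above excludes chain rules and ε-rules of probability 1.0.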
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
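The ordering idea mentioned above can be sketched on top of the value-computation sketch given earlier (same assumed rule format): for acyclic inputs, visiting nonterminals so that right-hand sides come before left-hand sides lets a single sweep suffice, mirroring Mohri's treatment of acyclic graphs. graphlib is in the standard library from Python 3.9 on.

```python
from graphlib import TopologicalSorter

# Order nonterminals so that every nonterminal comes after those it depends on.
def topological_order(nonterminals, rules):
    deps = {A: set() for A in nonterminals}
    for lhs, _, rhs in rules:
        deps[lhs].update(rhs)          # lhs is computed from its right-hand side
    return list(TopologicalSorter(deps).static_order())

rules = [
    ("S",  lambda x, y: x * y, ("NP", "VP")),
    ("NP", lambda: 0.5,        ()),
    ("VP", lambda: 0.6,        ()),
]
print(topological_order(["S", "NP", "VP"], rules))   # e.g. ['NP', 'VP', 'S']
```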
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-5
Weight algebras
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) wt: R (set of rules) → Ω (set of operations), T_Ω (terms) → K (weight algebra)
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) wt: R (set of rules) → Ω (set of operations), T_Ω (terms) → K (weight algebra)
[]
GEM-SciDuet-train-55#paper-1103#slide-6
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
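The value computation algorithm (Alg. 1) discussed above repeatedly recomputes every nonterminal's value as the ⊕-sum, over its rules, of the rule's weight operation applied to the current values of the right-hand-side nonterminals, and stops once no value changes; as the section states, termination is guaranteed when the input wRTG-LM is closed. The following is a minimal Python sketch of that fixpoint loop; `Rule`, `value_computation` and the `zero`/`oplus` parameters are names assumed for this illustration, not code from the paper.

```python
# A minimal sketch of the value computation algorithm (Alg. 1); all names
# are illustrative assumptions, not the authors' implementation.
from collections import namedtuple

# A rule A -> sigma(A_1, ..., A_m) of the item RTG G'; `weight` is the m-ary
# operation wt'(r) taken from the M-monoid's operation set Omega.
Rule = namedtuple("Rule", ["lhs", "rhs", "weight"])   # rhs: tuple of nonterminals

def value_computation(nonterminals, rules, initial, zero, oplus):
    """Iterate V(A) = ⊕_{r: A -> ...} wt'(r)(V(A_1), ..., V(A_m)) to a fixpoint."""
    V = {A: zero for A in nonterminals}
    changed = True
    while changed:                         # lines 3-12 of Alg. 1
        changed = False
        for A in nonterminals:             # recompute every nonterminal's value
            new = zero
            for r in rules:
                if r.lhs == A:
                    new = oplus(new, r.weight(*(V[B] for B in r.rhs)))
            if new != V[A]:
                changed = True
                V[A] = new
    return V[initial]

# Toy usage with the "probability of best derivation" weights ([0, 1], max),
# where wt'(r)(k_1, ..., k_m) = p_r * k_1 * ... * k_m:
rules = [
    Rule("S", ("NP", "VP"), lambda x, y: 1.0 * x * y),
    Rule("NP", (), lambda: 0.5),
    Rule("VP", (), lambda: 0.6),
]
print(value_computation({"S", "NP", "VP"}, rules, "S", 0.0, max))   # -> 0.3
```

On this acyclic toy grammar the loop stabilizes after at most three iterations; in general the closedness condition of Section 5.1 is what bounds the number of value changes.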
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-6
Weighted RTG based language models
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) A wRTG-LM is a tuple wt RTG language algebra M-monoid wt:
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) A wRTG-LM is a tuple wt RTG language algebra M-monoid wt:
[]
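The slide above summarizes that a wRTG-LM is a tuple consisting of an RTG, a language algebra, an M-monoid and the weight mapping wt. A small, hypothetical container for that tuple, together with the initial-algebra evaluation of a term in a language algebra, could look as sketched below; the field and function names are assumptions for illustration only.

```python
# Hypothetical container for the tuple ((G, (L, phi)), (K, ⊕, 0, Ω, ⊕∞), wt).
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Set, Tuple

@dataclass
class RTG:
    nonterminals: Set[str]
    initial: str                              # A_0
    rules: List[Any]                          # rules A -> sigma(A_1, ..., A_m)

@dataclass
class WRTGLM:
    rtg: RTG                                  # syntax component G
    language_algebra: Dict[str, Callable]     # interpretation phi of each terminal
    weight_algebra: Any                       # complete M-monoid (K, ⊕, 0, Ω, ⊕∞)
    wt: Dict[Any, Callable]                   # maps each rule to an operation in Ω

def evaluate(term: Tuple[str, tuple], algebra: Dict[str, Callable]):
    """Unique homomorphism (.)_L: evaluate a Sigma-term, given as nested
    (symbol, children) tuples, in the language algebra."""
    symbol, children = term
    return algebra[symbol](*(evaluate(t, algebra) for t in children))

# e.g. a fragment of the CFG_Delta string algebra from the paper's Ex. 1:
algebra = {"<x1 x2>": lambda u, v: u + " " + v,
           "<fruit>": lambda: "fruit", "<flies>": lambda: "flies"}
print(evaluate(("<x1 x2>", (("<fruit>", ()), ("<flies>", ()))), algebra))  # fruit flies
```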
GEM-SciDuet-train-55#paper-1103#slide-7
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
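Ex. 3 in the paper content above defines the best derivation M-monoid BD, whose elements pair a probability with a set of derivation trees: max_BD keeps the pair with the larger probability and unites the tree sets on a tie, while tc_{p,r} multiplies the probabilities and places rule r on top of every combination of child derivations. A hedged Python sketch of these two operations follows; the tree encoding and function names are assumptions, and the usage example assumes the `Rule`/`value_computation` definitions from the earlier sketch are in scope, instantiated with rules r4 (probability 0.3) and r10 (probability 0.6) of Fig. 1.

```python
# Sketch of the best derivation M-monoid BD of Ex. 3; derivation trees are
# encoded as nested tuples (rule, child_tree, ...).
from math import prod
from itertools import product

BD_ZERO = (0.0, frozenset())          # the element (0, ∅)

def max_bd(a, b):
    """max_BD: keep the pair with the larger probability; unite the
    derivation-tree sets on a tie."""
    (p1, d1), (p2, d2) = a, b
    if p1 > p2:
        return a
    if p2 > p1:
        return b
    return (p1, d1 | d2)

def tc(p, r):
    """Top concatenation tc_{p,r}: an m-ary operation on BD."""
    def op(*children):
        prob = p * prod(q for q, _ in children)
        trees = frozenset((r,) + combo
                          for combo in product(*(d for _, d in children)))
        return (prob, trees)
    return op

# Toy usage with the value_computation sketch from earlier:
rules = [
    Rule("NP", ("NNS",), tc(0.3, "r4")),   # r4: NP -> <x1>(NNS), probability 0.3
    Rule("NNS", (), tc(0.6, "r10")),       # r10: NNS -> <bananas>, probability 0.6
]
print(value_computation({"NP", "NNS"}, rules, "NP", BD_ZERO, max_bd))
# -> (0.18, frozenset({('r4', ('r10',))}))
```

Since BD is not superior and the item grammar produced by the deduction system may be cyclic, this weight algebra is exactly the kind of input for which the section argues that Alg. 1 applies while the algorithms of Knuth, Goodman and Mohri in general do not.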
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-7
Weighted parsing algorithm
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) - wRTG-LM canonical weighted deduction system wRTG-LM
Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) - wRTG-LM canonical weighted deduction system wRTG-LM
[]
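The slide above depicts the two-phase pipeline of the weighted parsing algorithm: the canonical weighted deduction system turns the input wRTG-LM and the syntactic object into an item grammar, whose value is then computed by the value computation algorithm. The sketch below is a deliberately simplified, CYK/Bar-Hillel-style instance of the first phase for a string algebra with CNF-like rules (single-word terminals of rank 0 and binary concatenation of rank 2); it uses span items (A, i, j) in place of the paper's (A, σ, factor) items, reuses the hypothetical `Rule` type from the earlier sketch, and all other names are assumptions as well.

```python
def cwds_cnf(grammar_rules, initial, sentence):
    """Simplified canonical weighted deduction system for CNF-like string rules.

    grammar_rules: list of (lhs, word, children, weight_op) where either
    children == () and word is a terminal word, or children is a pair (B, C)
    and word is None (binary concatenation <x1 x2>).  Returns the item
    nonterminals, the item rules and the start item for the second phase."""
    n = len(sentence)
    item_rules = []
    for lhs, word, children, op in grammar_rules:
        if children == ():                              # lhs -> <word>()
            item_rules += [Rule((lhs, i, i + 1), (), op)
                           for i in range(n) if sentence[i] == word]
        else:                                           # lhs -> <x1 x2>(B, C)
            B, C = children
            item_rules += [Rule((lhs, i, j), ((B, i, k), (C, k, j)), op)
                           for i in range(n)
                           for k in range(i + 1, n)
                           for j in range(k + 1, n + 1)]
    items = ({r.lhs for r in item_rules}
             | {x for r in item_rules for x in r.rhs})
    start = (initial, 0, n)
    return items | {start}, item_rules, start

# Second phase on the item grammar, e.g. with the BD weights sketched above:
#   items, irules, goal = cwds_cnf(grammar_rules, "S",
#                                  "fruit flies like bananas".split())
#   parse_value = value_computation(items, irules, goal, BD_ZERO, max_bd)
```

Because the spans strictly shrink from left-hand side to right-hand side, the resulting item grammar is acyclic, which is one of the sufficient conditions for closedness (and hence termination) listed in Section 5.2.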
GEM-SciDuet-train-55#paper-1103#slide-8
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
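A small Python sketch of Example 1's language algebra CFG_Delta follows (invented names; strings over Delta are represented as tuples of words, and the reading of the two abstract syntax trees d and d' is taken from the rules and trees quoted above). It illustrates the ambiguity: both ASTs evaluate to the same syntactic object a = fruit flies like bananas.

# Minimal sketch of Example 1's language algebra (hypothetical names): the
# binary symbol <x1 x2> concatenates its arguments, the unary symbol <x1> is
# the identity, and <delta> is the constant word delta.
def concat2(u, v):            # interpretation of <x1 x2>
    return u + v
def wrap1(u):                 # interpretation of <x1>
    return u
def word(delta):              # interpretation of <delta>
    return (delta,)

# pi_Sigma(d)  for d  = r1(r3(r8, r9), r6(r12, r4(r10)))  ("fruit flies" like bananas):
t_d  = concat2(concat2(word("fruit"), word("flies")),
               concat2(word("like"), wrap1(word("bananas"))))
# pi_Sigma(d') for d' = r1(r2(r8), r5(r11, r7(r13, r4(r10))))  (fruit "flies like bananas"):
t_d2 = concat2(wrap1(word("fruit")),
               concat2(word("flies"), concat2(word("like"), wrap1(word("bananas")))))

assert " ".join(t_d) == " ".join(t_d2) == "fruit flies like bananas"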
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
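As a concrete instance of the ADP view of Example 4 above (and of the non-looping case discussed next), here is a small Python sketch of the minimum edit distance. It is not the paper's encoding over u$v reversed; it is the familiar dynamic-programming recurrence, which is what the single-valued objective h(F) = {min(F)} together with Bellman's principle of optimality amounts to for this problem. All names are invented.

# Minimal sketch (invented names): minimum edit distance by dynamic programming.
# Each table cell plays the role of a sub-problem (a syntactic object), and the
# minimum over the incoming "rules" (deletion, insertion, replacement) realizes
# the objective h(F) = {min(F)} from Example 4.
def min_edit_distance(u: str, v: str) -> int:
    m, n = len(u), len(v)
    med = [[0] * (n + 1) for _ in range(m + 1)]   # med[i][j]: distance of u[:i], v[:j]
    for i in range(m + 1):
        med[i][0] = i                             # i deletions
    for j in range(n + 1):
        med[0][j] = j                             # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            rep = 0 if u[i - 1] == v[j - 1] else 1
            med[i][j] = min(med[i - 1][j] + 1,        # delete u[i-1]
                            med[i][j - 1] + 1,        # insert v[j-1]
                            med[i - 1][j - 1] + rep)  # replace (or keep)
    return med[m][n]

assert min_edit_distance("flies", "like") == 3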
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, on average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
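To complement the pseudocode of Algorithm 1 quoted above, here is a self-contained Python sketch of the value computation (the naive fixpoint iteration; the lazy, on-demand construction of G' and the Knuth/Mohri-style processing orders of the comparison section are omitted). The toy grammar, the Viterbi-style weight algebra (accumulation by max, each rule of probability p acting as (k1, ..., km) -> p * k1 * ... * km) and all identifiers are assumptions made purely for illustration; the example contains a cycle with probability below 1, so the input is closed and the iteration reaches a fixpoint.

# Minimal sketch of the value computation algorithm for a Viterbi-style weight
# algebra: carrier [0, 1], accumulation = max, zero = 0.0.  Invented names.
from math import prod

def value_computation(nonterminals, rules, initial):
    # rules: list of (head, p, body) where body is a tuple of nonterminals
    V = {A: 0.0 for A in nonterminals}
    changed = True
    while changed:                      # "repeat ... until changed = false"
        changed = False
        for A in nonterminals:
            new = 0.0
            for head, p, body in rules:
                if head == A:
                    new = max(new, p * prod(V[B] for B in body))
            if V[A] != new:
                V[A] = new
                changed = True
    return V[initial]

# Cyclic toy grammar: S -> S has probability 0.5 < 1, so although there are
# infinitely many ASTs, extra cycles only lower the product and the best
# derivation avoids them.
rules = [
    ("S", 0.5, ("S",)),     # cyclic chain rule
    ("S", 0.4, ("A", "A")),
    ("A", 0.9, ()),         # rule of rank 0
]
print(value_computation(["S", "A"], rules, "S"))   # 0.4 * 0.9 * 0.9 = 0.324 (up to floating point)

A best-first variant for superior weight algebras, which fixes the processing order as discussed in the comparison section, is sketched after the value-computation slide further below.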
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-8
Canonical weighted deduction system
Parsing as deduction (Shieber, Schabes, and Pereira 1995) is a rule factors()
Parsing as deduction (Shieber, Schabes, and Pereira 1995) is a rule factors()
[]
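The slide above summarizes the canonical weighted deduction system. The following Python sketch (invented names, restricted to the string algebra CFG_Delta with nullary and binary rules, and omitting the extra initial rule and the symbol component of the items) shows the CYK-like construction: the new nonterminals are items pairing a nonterminal of G with a factor of the input (here a span (i, j) standing for the substring sentence[i:j]), and every instantiated rule keeps the weight of the rule it was built from.

# Minimal sketch (invented names) of the canonical weighted deduction system
# for the string case, restricted to lexical and binary rules.
def cwds_cfg(lexical, binary, sentence):
    # lexical: {(A, word): weight},  binary: {(A, B, C): weight}
    n = len(sentence)
    items, new_rules = set(), []
    for i, w in enumerate(sentence):                       # rules of rank 0
        for (A, word), wt in lexical.items():
            if word == w:
                items.add((A, i, i + 1))
                new_rules.append(((A, i, i + 1), wt, ()))
    for width in range(2, n + 1):                          # rules of rank 2
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for (A, B, C), wt in binary.items():
                    items.add((A, i, k))
                    new_rules.append(((A, i, k), wt, ((B, i, j), (C, j, k))))
    return items, new_rules

The (head, weight, body) triples produced here have the same shape as the toy rules consumed by the value-computation sketch earlier, so the two phases of the pipeline can be plugged together.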
GEM-SciDuet-train-55#paper-1103#slide-9
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammarbased language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
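A remark on how row (d) of Tab. 1 subsumes rows (a)-(c): the comparison relies on the observation from the preliminaries that every semiring (K, ⊕, ⊗, 0, 1) can be viewed as an M-monoid via the operations mul_k^(m). The following Python sketch illustrates that wrapping; the class and function names are illustrative choices made here, not notation from the paper, and the Viterbi semiring at the end is only an assumed example instance.

```python
from dataclasses import dataclass
from functools import reduce
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    """A semiring (K, plus, times, zero, one), i.e. the weight structure of rows (b)/(c)."""
    plus: Callable[[Any, Any], Any]
    times: Callable[[Any, Any], Any]
    zero: Any
    one: Any

def mul(sr: Semiring, k):
    """mul_k^(m): (k_1, ..., k_m) |-> k * k_1 * ... * k_m, the M-monoid operation
    that a rule of semiring weight k contributes in the reading of row (d)."""
    def op(*args):
        return reduce(sr.times, args, k)
    return op

# Assumed example instance: the Viterbi semiring ([0, 1], max, *, 0, 1).
viterbi = Semiring(plus=max, times=lambda a, b: a * b, zero=0.0, one=1.0)
wt_r = mul(viterbi, 0.5)          # the operation associated with a rule of weight 0.5
print(wt_r(0.4, 0.6))             # 0.5 * 0.4 * 0.6 = 0.12
print(viterbi.plus(0.12, 0.2))    # ambiguity is resolved by the semiring's plus: 0.2
```

In this reading, Knuth's superior functions and the semiring multiplications of rows (b) and (c) are particular choices of Ω, which is the sense in which every valid input of rows (a)-(c) is also a valid input of Alg. 1.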
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-9
Value computation algorithm
Input: a wRTG-LM ((G′, CFG_∅), (K, ⊕, 0, Ω, ⊕), wt′) with G′ = (N′, Σ′, A′_0, R′); Output: V(A′_0). for each A ∈ N′ do V(A) ← 0; repeat: changed ← false; for each A ∈ N′ do: V_new ← 0; for each r = (A → x_1 … x_m (A_1, …, A_m)) in R′ do V_new ← V_new ⊕ wt′(r)(V(A_1), …, V(A_m)); if V(A) ≠ V_new then changed ← true; V(A) ← V_new; until changed = false
Input: a wRTG-LM ((G′, CFG_∅), (K, ⊕, 0, Ω, ⊕), wt′) with G′ = (N′, Σ′, A′_0, R′); Output: V(A′_0). for each A ∈ N′ do V(A) ← 0; repeat: changed ← false; for each A ∈ N′ do: V_new ← 0; for each r = (A → x_1 … x_m (A_1, …, A_m)) in R′ do V_new ← V_new ⊕ wt′(r)(V(A_1), …, V(A_m)); if V(A) ≠ V_new then changed ← true; V(A) ← V_new; until changed = false
[]
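The slide above shows Alg. 1 (the value computation algorithm). As a reading aid, here is a minimal executable Python sketch of the same fixpoint iteration; the encoding of rules as (lhs, weight operation, rhs nonterminals) triples, the parameters plus and zero standing in for the weight algebra's ⊕ and 0, and all other identifiers are assumptions made for this sketch, not notation from the paper.

```python
from typing import Any, Callable, Dict, Hashable, List, Sequence, Tuple

# A rule A -> <x_1...x_m>(A_1, ..., A_m) is encoded as (A, wt'(r), [A_1, ..., A_m]),
# where wt'(r) is an m-ary operation on the weight set K.
Rule = Tuple[Hashable, Callable[..., Any], Sequence[Hashable]]

def value_computation(nonterminals: List[Hashable],
                      rules: List[Rule],
                      plus: Callable[[Any, Any], Any],
                      zero: Any,
                      initial: Hashable) -> Any:
    """Fixpoint iteration of Alg. 1: recompute V(A) for every nonterminal until
    no value changes any more, then return V(A'_0)."""
    V: Dict[Hashable, Any] = {A: zero for A in nonterminals}   # lines 1-2 of Alg. 1
    changed = True
    while changed:                                             # repeat ... until
        changed = False
        for A in nonterminals:                                 # line 5
            v_new = zero                                       # line 6
            for lhs, wt_op, rhs in rules:                      # lines 7-8
                if lhs == A:
                    v_new = plus(v_new, wt_op(*(V[B] for B in rhs)))
            if V[A] != v_new:                                  # lines 9-11
                changed = True
                V[A] = v_new
    return V[initial]
```

For a closed wRTG-LM the loop terminates and returns the ⊕-accumulation over all abstract syntax trees (Theorem 7); a concrete numeric run on a small cyclic toy grammar is sketched further below, after the "Value computation algorithm example" slide.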
GEM-SciDuet-train-55#paper-1103#slide-10
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
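As a minimal worked instance of this definition (the two weights are assumed for illustration only and do not come from the paper): suppose a string a has exactly two ASTs d_1 and d_2, and the products of their rule weights are 0.36 and 0.40; then, with ⊕ = max,

\[ k \;=\; \bigoplus_{d \in \mathrm{AST}(G,a)} \mathrm{wt}(d)_K \;=\; \max(0.36,\ 0.40) \;=\; 0.40 . \]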
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, in average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
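Example 3's best derivation M-monoid BD is compact enough to render in code. The sketch below implements max_BD and the top-concatenation operations tc_{p,r}, representing a derivation as a nested tuple of rule names; this representation, the helper _cartesian, and all identifiers are choices made for the sketch rather than the paper's notation. It reproduces the weight 0.0216 of the AST d = r1(r3(r8, r9), r6(r12, r4(r10))) from Fig. 2; the competing AST d' is entered as an opaque (probability, derivation set) pair with the reported 0.0144, since not all of its rule probabilities are listed in this excerpt.

```python
from typing import FrozenSet, Tuple

# An element of the best derivation M-monoid BD: (probability, set of derivations).
BDVal = Tuple[float, FrozenSet]

ZERO: BDVal = (0.0, frozenset())   # the neutral element (0, empty set)

def max_bd(x: BDVal, y: BDVal) -> BDVal:
    """max_BD: keep the higher probability; on a tie, unite the derivation sets."""
    if x[0] > y[0]:
        return x
    if y[0] > x[0]:
        return y
    return (x[0], x[1] | y[1])

def _cartesian(sets):
    """All ways of choosing one derivation from each argument's derivation set."""
    choices = [()]
    for s in sets:
        choices = [c + (d,) for c in choices for d in s]
    return choices

def tc(p: float, r: str):
    """Top concatenation tc_{p,r}: multiply the probabilities and root every
    combination of argument derivations in the rule r."""
    def op(*args: BDVal) -> BDVal:
        prob = p
        for q, _ in args:
            prob *= q
        derivs = frozenset((r,) + c for c in _cartesian([a[1] for a in args]))
        return (prob, derivs)
    return op

# Weight of the AST d of Fig. 2, built bottom-up with the tc operations shown there:
d = tc(1.0, "r1")(
        tc(0.5, "r3")(tc(1.0, "r8")(), tc(0.4, "r9")()),
        tc(0.6, "r6")(tc(1.0, "r12")(), tc(0.3, "r4")(tc(0.6, "r10")())))
print(round(d[0], 4))                    # 0.0216, as reported in Fig. 2

# Fig. 2 reports 0.0144 for the second AST d'; treated here as an opaque BD value:
d_prime: BDVal = (0.0144, frozenset({"r1(r2(r8), r5(r11, r7(r13, r4(r10))))"}))
print(round(max_bd(d, d_prime)[0], 4))   # 0.0216: the first reading wins
```

Feeding these operations in as the rule weights of a closed wRTG-LM would let the value computation sketch given earlier accumulate max_BD over all ASTs at once, which is exactly the best derivation problem.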
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-10
Value computation algorithm example
Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019)
Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019)
[]
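A small self-contained example run of Alg. 1, in the spirit of this slide: the grammar below is an assumed toy grammar with a cycle, the weights are Viterbi-style (⊕ = max, each rule operation multiplies its arguments by the rule's probability), and the encoding is invented for illustration; none of it is taken from the original slide.

```python
from functools import reduce

# Assumed toy grammar: (lhs, rule probability, rhs nonterminals); note the cycle A -> A.
rules = [
    ("S", 0.9, ["A"]),     # S -> <x1>(A)
    ("A", 0.5, ["A"]),     # A -> <x1>(A), a cyclic chain rule with probability < 1
    ("A", 0.4, []),        # A -> <a>()
]
V = {"S": 0.0, "A": 0.0}   # lines 1-2 of Alg. 1: V(A) <- 0 for every nonterminal

changed = True
while changed:             # repeat ... until changed = false
    changed = False
    for A in V:
        v_new = 0.0        # the monoid's 0
        for lhs, p, rhs in rules:
            if lhs == A:   # apply wt'(r) to the current values, accumulate with max
                v_new = max(v_new, reduce(lambda x, y: x * y, (V[B] for B in rhs), p))
        if V[A] != v_new:
            changed, V[A] = True, v_new

print({A: round(v, 4) for A, v in V.items()})   # {'S': 0.36, 'A': 0.4}
```

The cycle never improves a value because the chain rule's probability is below 1, so the iteration stabilizes after a few passes; this mirrors the remark in Section 5.2 that wRTG-LMs with the weight algebra BD are closed when no chain rule or ε-rule has probability 1.0.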
GEM-SciDuet-train-55#paper-1103#slide-11
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
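Ex. 4 casts the minimum edit distance as a weighted parsing problem. As a point of comparison, the snippet below computes med with an ordinary dynamic program; the unit costs (one per deletion, per insertion, and per mismatching replacement) are the conventional ones and are only assumed to correspond to the operations of Fig. 3, while the objective function h(F) = {min(F)} from the example appears here simply as min.

```python
# Conventional DP for minimum edit distance, shown only for comparison with
# the ADP-as-weighted-parsing formulation of Ex. 4. The unit costs below are
# the usual ones and are an assumption, not copied from Fig. 3 of the paper.
def med(u, v):
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                              # i deletions
    for j in range(n + 1):
        d[0][j] = j                              # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            rep = 0 if u[i - 1] == v[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # delete u[i-1]
                          d[i][j - 1] + 1,       # insert v[j-1]
                          d[i - 1][j - 1] + rep) # replace or keep
    return d[m][n]

assert med("kitten", "sitting") == 3
```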
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
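Theorem 7 guarantees termination and correctness of the value computation algorithm for closed inputs. The sketch below reproduces the fixpoint loop of Alg. 1 on an invented toy grammar whose RTG is cyclic (a rule S → S S) but which is closed under Viterbi-style weights, since the cyclic rule multiplies by 0.5 < 1 and therefore can never improve the maximum. The (lhs, weight operation, rhs) encoding of rules and all names are our own assumptions.

```python
# Hedged sketch of the value computation algorithm (Alg. 1). Rules are encoded
# as (lhs, weight_operation, [rhs nonterminals]); `add` and `zero` come from
# the weight algebra. This data layout is assumed for the illustration only.
def value_computation(nonterminals, rules, initial, add, zero):
    V = {A: zero for A in nonterminals}
    changed = True
    while changed:                                   # repeat ... until not changed
        changed = False
        for A in nonterminals:
            v = zero
            for lhs, op, rhs in rules:
                if lhs == A:
                    v = add(v, op(*[V[B] for B in rhs]))
            if V[A] != v:
                changed, V[A] = True, v
    return V[initial]

# Invented example with Viterbi weights and a cyclic rule S -> S S:
toy_rules = [("S", lambda: 0.4, []),
             ("S", lambda x, y: 0.5 * x * y, ["S", "S"])]
print(value_computation(["S"], toy_rules, "S", max, 0.0))   # converges to 0.4
```

This also illustrates why, as noted above, wRTG-LMs over BD are closed whenever no chain rule or ε-rule has probability 1.0: every pass through a cycle multiplies the weight by a factor below 1, so cyclic ASTs never change the max-accumulated value. The lazy, on-demand construction of the rule set described after Alg. 1 is omitted from the sketch.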
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, on average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
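Row (a) of Tab. 1 and the complexity remarks above refer to Knuth's (1977) generalization of Dijkstra's algorithm, in which nonterminals are settled in order of increasing best weight; this ordering is valid when the weight functions are superior. The sketch below illustrates that idea with min-plus weights, reusing the (lhs, operation, rhs) rule encoding of the previous sketch; it is an illustration of the principle, not the paper's implementation.

```python
# Hedged sketch of Knuth-style best-weight computation with superior weight
# functions (min-plus here). Rule encoding and names are our own assumptions.
import heapq

def knuth_best(nonterminals, rules, initial):
    best = {}                                            # settled nonterminals
    agenda = [(op(), lhs) for lhs, op, rhs in rules if not rhs]
    heapq.heapify(agenda)
    while agenda and len(best) < len(nonterminals):
        w, A = heapq.heappop(agenda)
        if A in best:
            continue
        best[A] = w                                      # settled: w is final
        for lhs, op, rhs in rules:
            if rhs and all(B in best for B in rhs):
                heapq.heappush(agenda, (op(*[best[B] for B in rhs]), lhs))
    return best.get(initial)

# Invented min-plus example (cheapest derivation cost):
toy = [("A", lambda: 1.0, []),
       ("B", lambda x: x + 2.0, ["A"]),
       ("S", lambda x, y: x + y + 1.0, ["A", "B"])]
print(knuth_best(["A", "B", "S"], toy, "S"))             # 5.0
```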
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-11
Termination and correctness
- wRTG-LM → canonical weighted deduction system (weight preserving) → closed wRTG-LM → value computation algorithm. Sufficient: ((G, (L, φ)), K, wt) is closed or non-looping, e.g., acyclic RTGs, superior M-monoids, algebraic dynamic programming. (Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models, FSMNLP 2019)
- wRTG-LM → canonical weighted deduction system (weight preserving) → closed wRTG-LM → value computation algorithm. Sufficient: ((G, (L, φ)), K, wt) is closed or non-looping, e.g., acyclic RTGs, superior M-monoids, algebraic dynamic programming. (Richard Mörbitz, Heiko Vogler: Weighted parsing for grammar-based language models, FSMNLP 2019)
[]
GEM-SciDuet-train-55#paper-1103#slide-12
1103
Weighted parsing for grammar-based language models
We develop a general framework for weighted parsing which is built on top of grammar-based language models and employs flexible weight algebras. It generalizes previous work in that area (semiring parsing, weighted deductive parsing) and also covers applications outside the classical scope of parsing, e.g., algebraic dynamic programming. We show an algorithm which terminates and is correct for a large class of weighted grammar-based language models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415 ], "paper_content_text": [ "Introduction The weighted parsing problem takes as input a weighted language model (LM) and a syntactic object a.", "For instance, the LM can be given by some grammar G, e.g., a context-free grammar (CFG) or a linear context-free rewriting system (LCFRS), and a can be some string.", "Each rule r of G has a value (weight of r); the weight is an element of some weight algebra K. That algebra has a binary commutative and associative operation ⊕ on its carrier set, which is used to handle ambiguity of G. As output we expect an element k ∈ K which is the ⊕-accumulation of the weight wt(d) K of each abstract syntax tree (AST) d of a in G, i.e., k = ⊕ d∈AST(G,a) wt(d) K where wt(d) K is computed by other operations of the algebra K (using the weights of the occurring rules) and ⊕ is an extension of ⊕ to infinitely many summands (infinitary sum operation).", "For instance, if K = [0, 1] is the set of probabilities, ⊕ = max, ⊕ = sup, and wt(d) K is the product of all weights of occurrences of rules in d, then k is the maximal probability of an AST of a in G. 
Goodman (1999) developed a formal framework for weighted parsing, called semiring parsing.", "As weight algebras he used complete semirings (K, ⊕, ⊗, 0, 1, ⊕ ) (Eilenberg, 1974) , i.e., ⊕ is the infinitary sum operation extending ⊕.", "The binary operation ⊗ is used to compute wt(d) K .", "By appropriate choices of the complete semiring, he formalized the following problems as weighted parsing problems for a CFG G and a: the calculation of recognition, string probabilities, number of derivations, derivation forests, probability of best derivation, best derivation, and best n derivations.", "The algorithm which he proposed for solving the weighted parsing problem is a pipeline with two phases.", "In the first phase, the CFG G, a deduction system I (Shieber et al., 1995) , and the syntactic object a (i.e., a string) are combined into a single CFG G (using a construction idea of Bar-Hillel et al., 1961) .", "In the second phase, the value k (from above) is calculated, if G is acyclic.", "1 Nederhof (2003) developed a similar framework, called weighted deductive parsing.", "As weight algebras he employed algebras of the form (K, min, 0, Ω, min ) where K is a totally ordered set, min = inf (infimum), inf(K) ∈ K, and Ω is a set of superior functions; a superior function f is an operation on K which is monotone nondecreasing in each argument and f (k 1 , .", ".", ".", ", k m ) ≥ max(k 1 , .", ".", ".", ", k m ) holds.", "The algorithm which he proposed for solving the weighted parsing problem is again a pipeline with two phases, where the first phase is the same as in the framework of Goodman (1999) and the resulting CFG G is denoted by c(G, a).", "In the second phase, he employed the algorithm of Knuth (1977) , which generalizes the shortest distance algorithm of Dijkstra (1959) from graphs to hypergraphs and also works if G is cyclic.", "If the CFG G is non-branching, i.e., a linear grammar (Khabbaz, 1974, Def.", "1) , then in the second phase a graph algorithm can be used as an alternative to Knuth's algorithm; e.g., the single source shortest distance algorithm of Mohri (2002) if the weight algebra K is a complete semiring which is closed for G .", "In this paper, we generalize the two-phase pipeline approach of Goodman (1999) and Nederhof (2003) as follows.", "We specify the LM by using the general approach of initial algebra semantics (Goguen et al., 1977) .", "For this, we employ weighted regular tree grammars (wRTG) and evaluate the generated trees (by the unique homomorphism) in some language algebra L, which provides the set of syntactic objects as carrier set.", "This approach is very flexible and covers LMs for strings (CFG, LCFRS), but also trees and graphs (Drewes et al., 2016) .", "Our second generalization concerns the weight algebra.", "We consider complete multioperator monoids (Kuich, 1999) which are algebras of the form (K, ⊕, 0, Ω, ⊕ ), where Ω is a set of operations on K and ⊕ is the infinitary sum operation which extends ⊕.", "We call the combination of such an LM and weight algebra weighted RTG-based language model (wRTG-LM).", "These combinations are very general and even exceed the scope of parsing; e.g., each algebraic dynamic programming problem (Giegerich et al., 2004) , like minimum edit distance or matrix chain multiplication, can be formalized within this framework.", "For solving the weighted parsing problem, given a wRTG-LM and a syntactic object a, we formalize the first phase as canonical weighted deduction system, which uses a CYK-like deduction system.", "For 
the second phase (value computation algorithm), we propose a generalization of Mohri's approach to hypergraphs, in the spirit of Knuth's generalization of Dijkstra's algorithm.", "We prove (in sketches) that our weighted parsing algorithm is terminating and solves the weighted parsing problem for every closed wRTG-LM with a finitely decomposing language algebra.", "This covers the approaches of Goodman (1999) and Nederhof (2003) ; our value computation algorithm subsumes the algorithms of Knuth (1977) and Mohri (2002) .", "Due to space restrictions, we cannot show our detailed proofs of the theorems in this paper.", "Preliminaries Mathematical notions.", "We let N = {0, 1, 2, .", ".", ".}", "be the set of natural numbers and [m] = {1, .", ".", ".", ", m} for each m ∈ N. An alphabet is a finite, nonempty set.", "The powerset of a set A is denoted by P(A).", "Let f : A → B be a mapping; we extend it to the mapping f : P(A) → P(B) by letting f (U) = { f (a) | a ∈ U}, and we denote f also by f .", "A family over A is a mapping f : I → A, where I is a countable set (index set).", "As usual, we represent each family f over A by ( f (i) | i ∈ I) and abbreviate f (i) by f i .", "Ranked sets, trees, and regular tree grammars.", "A ranked set is a set Γ such that each γ ∈ Γ is associated with a natural number rk Γ (γ), its rank.", "The set of all elements of Γ with rank m ∈ N is denoted by Γ m .", "A ranked set Σ with Σ ⊆ Γ is rank preserving (in Γ) if Σ m ⊆ Γ m for each m ∈ N. Let H be a set.", "The set of trees over Γ and H is defined in the usual way, where elements of H may only occur at leaves.", "We denote this set by T Γ (H) and abbreviate T Γ (∅) by T Γ .", "Let t ∈ T Γ (H).", "A path in t is a sequence of positions of d from the root to a leaf.", "Let p be a path in t. 
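Returning briefly to the semiring-parsing view recalled in the introduction above: Goodman's point is that one and the same sum-of-products computation specializes to different quantities depending on the chosen complete semiring. The minimal sketch below shows two of the listed instances (number of derivations and probability of the best derivation); the class and field names are ours, and derivations are simplified to flat lists of rule weights.

```python
# Minimal, illustrative semiring interface in the spirit of Goodman (1999).
# Names and the flat-list encoding of derivations are assumptions of this sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Semiring:
    zero: object
    one: object
    add: Callable
    mul: Callable

counting = Semiring(0, 1, lambda a, b: a + b, lambda a, b: a * b)  # number of derivations
viterbi = Semiring(0.0, 1.0, max, lambda a, b: a * b)              # best-derivation probability

def accumulate(sr, derivations):
    total = sr.zero
    for d in derivations:                 # sum over derivations ...
        w = sr.one
        for r in d:                       # ... of products over rule weights
            w = sr.mul(w, r)
        total = sr.add(total, w)
    return total

derivs = [[1.0, 0.5, 0.4], [1.0, 0.3]]
print(accumulate(counting, [[1] * len(d) for d in derivs]))   # 2
print(accumulate(viterbi, derivs))                            # 0.3
```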
The sequence of labels of d along p is denoted by seq (d, p) .", "A ranked alphabet is a ranked set which is an alphabet.", "A regular tree grammar (RTG) (Brainerd, 1969 ) is a tuple G = (N, Σ, A 0 , R) where N is an alphabet (nonterminals), Σ is a ranked alphabet (terminals) with N ∩ Σ = ∅, A 0 ∈ N (initial nonterminal), and R is finite set of rules where each rule has the form A → σ(A 1 , .", ".", ".", ", A m ) with m ∈ N, A, A 1 , .", ".", ".", ", A m ∈ N, and σ ∈ Σ m .", "Each RTG G can be considered as a context-free grammar G (with terminal alphabet Σ ∪ {(, ), comma}), which generates well-formed expressions.", "Thus the derivation relation ⇒ G is the usual derivation relation of G .", "The tree language generated by G is the set L(G) = {t ∈ T Σ | A 0 ⇒ * G t}.", "By viewing each rule A → σ(A 1 , .", ".", ".", ", A m ) of R as symbol with rank m, we can define the set AST(G) of abstract syntax trees of G to be the set of all d ∈ T R such that for each position w of d the following holds: if d has label A → σ(A 1 , .", ".", ".", ", A m ) at w, then the i-th successor of w (i ∈ [m]) is labeled by a rule with left-hand side A i (cf.", "Fig.", "2 ).", "We define the mapping π Σ : AST(G) → T Σ such that π Σ (d) is obtained from d by replacing each label A → σ(A 1 , .", ".", ".", ", A m ) by σ (cf.", "Fig.", "2 ).", "Hence π Σ (AST(G)) = L(G).", "Γ-algebras.", "Let Γ be a ranked set.", "A Γ-algebra (or: algebra) is a pair (A, φ) where A is a set (carrier set) and φ is a mapping (interpretation map-ping) which maps each γ ∈ Γ m (m ∈ N) to an mary operation φ(γ) over A, i.e., φ(γ): A m → A.", "In the sequel, we will sometimes identify φ(γ) and γ (as it is usual in algebra).", "The Γ-term algebra is the Γ-algebra (T Γ , φ Γ ) where φ Γ (γ)(t 1 , .", ".", ".", ", t m ) = γ(t 1 , .", ".", ".", ", t m ) for every m ∈ N, γ ∈ Γ m , and t 1 , .", ".", ".", ", t m ∈ T Γ .", "For each Γ-algebra (A, φ) there is exactly one homomorphism, denoted by (.)", "A , from the Γ-term algebra to (A, φ) (Wechler, 1992) .", "We write its application to an argument t ∈ T Γ as t A .", "Intuitively, (.)", "A evaluates a tree t in (A, φ), in the same way as arithmetic expressions (e.g., 3 + 2 · (4 + 5)) are evaluated in the {+, ·}-algebra (Z, +, ·) to some values (here: 21).", "Often we abbreviate an algebra (A, φ) by its carrier set A.", "For every a ∈ A we let factors (a) = {b ∈ A | b < factor * a}, where for every a, b ∈ A, b < factor a if there is a γ ∈ Γ such that b occurs in some tuple (b 1 , .", ".", ".", ", b m ) with φ(γ)(b 1 , .", ".", ".", ", b m ) = a.", "We call (A, φ) finitely de- composable if factors(a) is finite for every a ∈ A. Monoids.", "A monoid is an algebra (K, ⊕, 0) such that ⊕ is a binary, associative operation on K and 0 ⊕ k = k = k ⊕ 0 for each k ∈ K. In the rest of this paper, each occurrence of k, k 1 , k 2 , .", ".", ".", "is assumed to be universally quantified over K if not specified otherwise.", "The monoid is commutative if ⊕ is commutative; it is extremal (Mahr, 1984) if k 1 ⊕k 2 ∈ {k 1 , k 2 }; it is idempotent if k⊕k = k. 
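The unique homomorphism (.)A from the Γ-term algebra into a Γ-algebra (A, φ) is, operationally, a three-line recursion: apply the interpretation of the root symbol to the values of the subterms. The snippet reuses the arithmetic example from the text, evaluating 3 + 2 · (4 + 5) to 21 in the {+, ·}-algebra over the integers; the (symbol, children) term encoding is our own.

```python
# Sketch of the unique homomorphism from the term algebra into a Gamma-algebra.
# The (symbol, children) encoding of terms is assumed for this illustration.
def evaluate(term, phi):
    symbol, children = term
    return phi[symbol](*[evaluate(c, phi) for c in children])

phi = {"+": lambda x, y: x + y, "*": lambda x, y: x * y,
       "2": lambda: 2, "3": lambda: 3, "4": lambda: 4, "5": lambda: 5}
t = ("+", [("3", []), ("*", [("2", []), ("+", [("4", []), ("5", [])])])])
print(evaluate(t, phi))   # 21
```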
It is naturally ordered if the binary relation ⊆ K × K (defined by k 1 k 2 if there is a k ∈ K such that k 1 ⊕k = k 2 ) is anti-symmetric (in which case it is a partial order, since reflexivity and transitivity hold by definition).", "It is complete if for each countable set I, there is an operation ⊕ I which maps each family (k i | i ∈ I) to an element of K, coincides with ⊕ when I is finite, and otherwise satisfies axioms which guarantee commutativity and associativity (Eilenberg, 1974, p. 124) .", "We abbreviate ⊕ (Karner, 1992) if for every k ∈ K and family (k i | i ∈ N) of elements of K the following holds: if there is an n 0 ∈ N such that for every n ∈ N with n ≥ n 0 , ⊕ i∈N:i≤n k i = k, then ⊕ i∈N k i = k. A complete monoid is completely idempotent if for every k ∈ K and countable set I it holds that ⊕ i∈I k = k. By easy calculations we obtain the following implications: (1) if K is extremal, then it is idempotent, (2) if K is completely idempotent, then it is d-complete, and (3) if K is d-complete, then it is naturally ordered.", "I (k i | i ∈ I) by ⊕ i∈I k i .", "A complete monoid is d-complete Multioperator monoids.", "A multioperator monoid (M-monoid) (Kuich, 1999) is an algebra (K, ⊕, 0, Ω) such that (K, ⊕, 0) is a commutative monoid and Ω is a set of operations on K which contains at least the unary identity id: K → K. We view Ω as a ranked set, and hence (K, φ) as an Ω-algebra where φ(ω) = ω for each ω ∈ Ω.", "Thus t K ∈ K is the evaluation of t ∈ T Ω in the algebra (K, φ).", "An M-monoid inherits the properties of its monoid (e.g., being complete).", "We denote a complete M-monoid by (K, ⊕, 0, Ω, ⊕ ).", "An M-monoid is distributive if for each m-ary ω ∈ Ω and every i ∈ [m], ω(k 1,i−1 , k i ⊕ k, k i+1,m ) = ω(k 1,i−1 , k i , k i+1,m ) ⊕ ω(k 1,i−1 , k, k i+1,m ) where k 1,i−1 and k i+1,m abbreviate k 1 , .", ".", ".", ", k i−1 and k i+1 , .", ".", ".", ", k m , respectively.", "If K is complete, then we additionally require that the above equation also holds for each countable set of summands.", "Next we show examples of M-monoids.", "• Each semiring (K, ⊕, ⊗, 0, 1) can be considered as the M-monoid (K, ⊕, 0, Ω ⊗ ) (Fülöp et al., 2009) where Knuth (1977) uses complete, distributive Mmonoids of the form (K, min, 0, Ω, min ) where K is a totally ordered set, inf(K) ∈ K, and the operations in Ω are superior functions.", "We will call such M-monoids superior M-monoids.", "We note that each superior M-monoid is dcomplete.", "Ω ⊗ = {mul (m) k | m ∈ N, k ∈ K} and for every m ∈ N we define mul (m) k (k 1 , .", ".", ".", ", k m ) = k ⊗ k 1 ⊗ · · · ⊗ k m .", "Note that 1 = mul (0) 1 ().", "• 3 Weighted RTG-based language models and the weighted parsing problem As framework for the definition of our language models we use the initial algebra approach (Goguen et al., 1977) .", "An RTG-based language model (RTG-LM) is a tuple (G, (L, φ)) where • G = (N, Σ, A 0 , R) is an RTG and • (L, φ) is a Γ-algebra (language algebra) such that Σ ⊆ Γ is rank preserving; the elements of L are called syntactic objects.", "The language generated by (G, (L, φ)) is the set from evaluating trees of L(G) in the language algebra L. 
For each a ∈ L, we let L(G) L = {t L | t ∈ L(G)} ⊆ L , i.e., AST(G, a) = {d ∈ AST(G) | π Σ (d) L = a} .", "Example 1.", "We consider the Γ-algebra CFG ∆ = (∆ * , φ) as language algebra where ∆ = {fruit, flies, like, bananas}, Γ = m∈N Γ m , and Γ m = { u 0 x 1 u 1 · · · x m u m | u i ∈ ∆ * }.", "We define φ( u 0 x 1 u 1 · · · x m u m )(a 1 , .", ".", ".", ", a m ) = u 0 a 1 u 1 · · · a m u m for every a 1 , .", ".", ".", ", a m ∈ ∆ * .", "We consider the RTG G = (N, Σ, S, R) with N = {S, NP, VP, PP, NN, NNS, VBZ, VBP, IN} and Σ = { δ | δ ∈ ∆} ∪ { x 1 , x 1 x 2 } ⊆ Γ, and R contains the rules shown in Fig.", "1 (ignoring the numbers above the arrows for the time being).", "The tree in the middle of the upper row of Fig.", "2 is an abstract syntax tree d ∈ AST(G).", "It expresses that certain insects (fruit flies) like something (bananas).", "We obtain π Σ (d) by dropping the non-highlighted parts of d (left of upper row).", "The application of the homomorphism (.)", "CFG ∆ : T Σ → CFG ∆ to π Σ (d) yields the string a = fruit flies like bananas.", "We note that there is another abstract syntax tree d ∈ AST(G), viz., d = r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 )))) such that π Σ (d ) CFG ∆ = a.", "It expresses how fruit performs a certain activity (to fly like bananas).", "Hence this RTG-LM is ambiguous.", "It should be clear from Ex.", "1 that each contextfree grammar with terminal alphabet ∆ can be represented as an RTG-LM (G, CFG ∆ ), and vice versa, each RTG-LM (G, CFG ∆ ) represents a CFG.", "In the same way, one can characterize LCFRS and tree adjoining grammars by (1) superposing sorts to the set N of nonterminals of the RTG (in order to represent fanout and the characteristic \"substitution tree / adjoining tree\" of arguments, respectively), and (2) by defining ap-propriate Γ-algebras LCFRS ∆ (Kallmeyer, 2010, Def.", "6.2+6 .3) and TAG ∆ (Büchse et al., 2012; Koller and Kuhlmann, 2012) , respectively.", "The language algebras CFG ∆ , LCFRS ∆ , and TAG ∆ are finitely decomposable.", "A weighted RTG-based language model (wRTG-LM) is a tuple (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt , where • (G, (L, φ)) is an RTG-LM, • (K, ⊕, 0, Ω, ⊕ ) is a complete M-monoid (weight algebra), and • wt maps each rule of G with rank m to an mary operation in Ω.", "We lift wt to the mapping wt : T R → T Ω and denote wt also by wt.", "Definition 2.", "The weighted parsing problem is the following problem: given a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt and an a ∈ L, compute the value parse(a) ∈ K where parse(a) = ⊕ d∈AST(G,a) wt(d) K .", "Example 3.", "(Ex.", "1 cont.)", "The best derivation problem of (Goodman, 1999) consists of computing, given a syntactic object a and a grammar, the abstract syntax trees of a with maximal probability (and this probability).", "Let R ∞ be a ranked set such that (R ∞ ) m is infinite for each m ∈ N. 
In analogy to Goodman, we define the best derivation Mmonoid to be the d-complete M-monoid BD = V, max BD , (0, ∅), Ω BD , max BD , where V = [0, 1] × P(T R ∞ ) and [0, 1] is the interval of real numbers from 0 to 1 and • for every (p 1 , D 1 ), (p 2 , D 2 ) ∈ V, the value max BD ((p 1 , D 1 ), (p 2 , D 2 )) is (p i , D i ) if p i > p j for i, j ∈ {1, 2}, and (p 1 , D 1 ∪ D 2 ) if p 1 = p 2 , • Ω BD = {tc p,r | p ∈ [0, 1] and r ∈ R ∞ }, where for each p ∈ [0, 1] and r ∈ R ∞ of rank m, we define tc p,r : V m → V (tc abbreviates top concatenation) such that for every (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) ∈ V tc p,r (p 1 , D 1 ), .", ".", ".", ", (p m , D m ) = (p , D ) where p = p · p 1 · .", ".", ".", "· p m and D = {r(d 1 , .", ".", ".", ", d m ) | d i ∈ D i , 1 ≤ i ≤ m}, and • for every family ((p i , D i ) | i ∈ I) over V, we define max BD i∈I (p i , D i ) = (p, D), where p = sup{p i | i ∈ I} and D = i∈I:p i =p D i .", "Since BD is completely idempotent, it is also dcomplete.", "x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas S → NP → NN → NNS → VP → VBP → NP → NNS → (NP, VP) (NN, NNS) (VBP, NP) (NNS) d ∈ AST(G) x 1 x 2 x 1 x 2 fruit flies x 1 x 2 like x 1 bananas t ∈ T Σ tc 1.0,r1 tc 0.5,r3 tc 1.0,r8 tc 0.4,r9 tc 0.6,r6 tc 1.0,r12 tc 0.3,r4 tc 0.6,r10 in T Ω 0.0216, {r 1 (r 3 (r 8 , r 9 ), r 6 (r 12 , r 4 (r 10 )))} 0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))} max BD a = fruit flies like bananas Figure 2 : Illustration of the weighted parsing problem for the wRTG-LM (G, CFG ∆ ), BD, wt and the syntactic object a = fruit flies like bananas of ∆ * , see Ex.", "3.", "Now we consider the finite set R of rules of the RTG G given in Ex.", "1.", "We can assume that R ⊆ R ∞ is rank preserving.", "We define the mapping wt: R → Ω BD by wt(r i ) = tc p i ,r i where p i is shown in Fig.", "1 above the arrow of r i .", "For each d ∈ AST(G, a), the second component of wt(d) BD has exactly one element.", "Recall d from Ex.", "1, a second AST which is evaluated to a.", "We obtain wt(d ) BD = (0.0144, {r 1 (r 2 (r 8 ), r 5 (r 11 , r 7 (r 13 , r 4 (r 10 ))))}).", "Thus wt(d ) ∈ T Ω d ∈ AST(G) π Σ (d ) ∈ T Σ π Σ wt (.)", "CFG ∆ (.)", "BD (.)", "BD wt π Σ (.)", "CFG ∆ parse max BD wt(d) BD , wt(d ) BD = wt(d) BD .", "As one might expect, it is more likely that a refers to the preferences (to like bananas) of certain insects (fruit flies).", "Fig.", "2 illustrates the parsing problem for the wRTG-LM ((G, CFG ∆ ), BD, wt) and a = fruit flies like bananas.", "In summary, each wRTG-LM consists of two components: a syntax component and a weight component.", "The syntax component (cf.", "the left of Fig.", "2 ) contains the language algebra (L, φ).", "This is a Γ-algebra whose carrier set is the set of syntactic objects.", "The mapping π Σ maps each abstract syntax tree to a tree in the Σ-term algebra T Σ , which is then evaluated to a syntactic object by the unique homomorphism (.)", "L (recall that Σ ⊆ Γ).", "The weight component (cf.", "the right of Fig.", "2 ) contains a complete M-monoid (K, ⊕, 0, Ω, ⊕ ) whose carrier set is the set of weights.", "The mapping wt maps each abstract syntax tree to a tree in the Ω-term algebra T Ω , which is then evaluated to a weight in K by the unique homomorphism (.)", "K .", "Weights in K are accumulated using ⊕.", "The weighted parsing problem takes as input a wRTG-LM and a syntactic object a, and it computes the ⊕-accumulation of the weights of each AST of a.", "A → del a (A) φ(del a )(w) = aw del a (n) = n + 1 A → ins a (A) φ(ins a )(w) = 
wa ins a (n) = n + 1 A → rep a,b (A) φ(rep a,b )(w) = awb rep a,b (n) = n A → nil φ(nil)() = $ nil() = 0 Example 4.", "Giegerich et al.", "(2004) formalized dynamic programming (Bellman, 1952 (Bellman, , 1954 in an algebraic setting, called algebraic dynamic programming (ADP).", "We claim that each ADP problem is a weighted parsing problem.", "To support this statement, we consider the computation of the minimum edit distance (med) between two words over some alphabet ∆ by deletion, insertion, and replacement, and we \"simulate\" its ADPspecification as wRTG-LM ((G, (L, φ)), K, wt).", "The rules of the RTG G and the interpretation φ are shown in the first and second columns of Fig.", "3 , respectively.", "Thus, for each tree t ∈ L(G), t L = u$v for some u, v ∈ ∆ * .", "We choose the complete, distributive M-monoid (K, ⊕, ∅, Ω, ⊕ ) with K = {h(F) | F ∈ P(N)} for the singlevalued objective function h: P(N) → P(N) with h(F) = {min(F)}.", "We let F 1 ⊕ F 2 = h(F 1 ∪ F 2 ) for every F 1 , F 2 ∈ K, and ⊕ i∈N F i = {inf( i∈N F i )}.", "The set Ω is shown in the third column of Fig.", "3 .", "Note that h satisfies Bellman's principle of optimality: h(ω(F)) = h(ω(h(F))) for each unary ω ∈ Ω and F ∈ K. Then med (u, v) = parse(u$v −1 ) for every u, v ∈ ∆ * , where v −1 is the reversal of v. This construction can be generalized to a procedure which turns every specification of an ADP problem into a weighted parsing problem.", "Due to space restrictions, we cannot present this procedure in its entirety.", "The weighted parsing algorithm The weighted parsing algorithm is supposed to solve the weighted parsing problem.", "As input, it takes a wRTG-LM G and a syntactic object a.", "Its output is intended to be parse(a).", "The algorithm is a pipeline with two phases (cf.", "Fig.", "4 ) and follows the modular approach of Nederhof (2003) .", "First, a canonical weighted deduction system computes from G and a a new wRTG-LM G with the same weight structure as G, but a different RTG and the language algebra CFG ∅ .", "Second, G is the input to the value computation algorithm (Alg.", "1), which computes the value V(A 0 ); this is supposed to be ⊕ d∈AST(G ) wt(d) K = parse(a).", "Weighted deduction systems.", "Parsing of some string w with some grammar G can be formalized as a deduction system D (Shieber et al., 1995) .", "D consists of a set of inference rules I 1 ...", "I m I {c 1 , .", ".", ".", ", c p } where m ∈ N, I, I 1 , .", ".", ".", ", I m are items, and c 1 , .", ".", ".", ", c p are side conditions.", "Each item represents a Boolean-valued property (of some combination of nonterminals of G and/or substrings of a = w).", "The meaning of an inference rule is: given that I 1 , .", ".", ".", ", I m and c 1 , .", ".", ".", ", c p are true, I is true as well.", "Nederhof (2003) pointed out that \"a deduction system having a grammar G [...] and input string w in the side conditions can be seen as a construction c of a context-free grammar c(G, w) [...]\"; also, he extended D and c(G, a) with weights.", "Inspired by this, we define the canonical weighted deduction system as a mapping cwds which takes two arguments: (a) a wRTG-LM G = (G, L), K, wt such that the language algebra (L, φ) is finitely decomposable and (b) a syntactic object a ∈ L. 
Let G = (N, Σ, A 0 , R) .", "Then we define cwds G, a = (G , CFG ∅ ), K, wt , where G = (N , Σ , A 0 , R ) and a) , and • for each σ ∈ Σ, the rule r = (A 0 , a) → (A 0 , σ, a) is in R and wt (r ) = id; for each r = A → σ(A 1 , .", ".", ".", ", A m ) in R and a 0 , a 1 , .", ".", ".", ", a m ∈ factors(a) with φ(σ)(a 1 , .", ".", ".", ", a m ) = a 0 and every • N = {(A 0 , a)} ∪ N × Σ × factors(a) ; N is finite, because L is finitely decomposable, • Σ = { x 1 .", ".", ".", "x m | a rule with rank m is in R}, • A 0 = (A 0 , rule A i → σ i (.", ".", ". )", "(i ∈ [m]) in R, the rule r (A, σ, a 0 ) → x 1 .", ".", ".", "x m (A 1 , σ 1 , a 1 ), .", ".", ".", ", (A m , σ m , a m ) is in R and we let wt (r ) = wt(r).", "Note that cwds implements a CYK-like deduction system.", "The elements of N have a very general form.", "Depending on L, they can be understood as, e.g., spans of strings, occurrences of patterns in trees, or occurrences of subgraphs in graphs.", "We note that for every d ∈ AST(G ) it holds that π Σ (d) CFG ∅ = ε, i.e., each abstract syntax tree is evaluated to the empty string.", "Moreover, cwds is weight-preserving in the following sense: (1) there is a bijective mapping ψ from the set AST(G, a) to AST(G ) and (2) for every d ∈ AST(G, a) we have that wt(d) K = wt (ψ(d)) K .", "Value computation algorithm.", "This is Alg.", "1.", "Its input is a wRTG-LM G with language algebra CFG ∅ .", "It maintains a mapping V, which assigns a weight to each nonterminal, and a Boolean variable changed.", "The output is the value V(A 0 ).", "The algorithm starts by assigning the weight 0 to each nonterminal (lines 1-2).", "Then, in a repeat-until loop (lines 3-12), the weight of each nonterminal is recomputed in every iteration of that loop as follows (where x 1,m abbreviates x 1 , .", ".", ".", ", x m ): V(A) = r∈R : r=(A→ x 1,m (A 1 ,...,A m )) wt (r) V(A 1 ), .", ".", ".", ", V(A m ) .", "Algorithm 1 Value computation algorithm Input: (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt which is a wRTG-LM with G = (N , Σ , A 0 , R ) Variables: V: N → K, V ∈ K, changed ∈ B Output: V(A 0 ) 1: for each A ∈ N do 2: V(A) ← 0 3: repeat 4: changed ← false 5: for each A ∈ N do 6: V ← 0 7: for each r = (A → x 1,m (A 1 , .", ".", ".", ", A m )) in R do 8: V ← V ⊕ wt (r) V(A 1 ), .", ".", ".", ", V(A m ) 9: if V(A) V then 10: changed ← true 11: V(A) ← V 12: until changed = false The algorithm terminates after the first iteration in which no nonterminal has changed its weight.", "We note that in practice, a complete computation of cwds(G, a) prior to the execution of the value computation algorithm (Alg.", "1) is impossible.", "Similar to Nederhof (2003) , we execute the value computation algorithm on an incomplete input which is extended on demand (lazy evaluation).", "More precisely, G is initialized so that it only contains the rules of rank 0 (and the nonterminals in their left-hand sides).", "Then, each time a value different from 0 is first assigned to a nonterminal A in line 11, we compute the following set of rules: each rule whose right-hand side only contains A and other nonterminals for which this computation has already been done is in that set.", "These new rules (and the nonterminals in their left-hand sides) are added to G .", "Termination and correctness We are interested in two formal properties of the value computation algorithm (Alg.", "1) and of the weighted parsing algorithm (Fig.", "4) : termination and correctness.", "The value computation algorithm computes the weights of the ASTs bottom-up and reuses 
the results of common subtrees (as in dynamic programming); this requires distributivity of the weight algebra.", "Moreover, solving the weighted parsing problem by a terminating algorithm involves the following difficulty: there may be infinitely many ASTs (due to cycles) which are evaluated to the same syntactic object a.", "Thus parse(a) is an infinite sum, which in general cannot be computed in finite time.", "Hence, a terminating algorithm can only solve the weighted parsing problem if the infinite sum is equal to the sum over some finite subset of the infinite sum's index set.", "We have organized this section as follows.", "In Subsection 5.1 we define the class of closed wRTG-LMs (similar to Mohri, 2002) and prove that the value computation algorithm (Alg.", "1) is terminating and correct for closed wRTG-LMs as input.", "We say that the value computation algorithm is correct if after termination V(A 0 ) = ⊕ d∈AST(G ) wt (d) K .", "In Subsection 5.2 we prove that the weighted parsing algorithm (Fig.", "4) is terminating and correct for two classes of inputs.", "We say that the weighted parsing algorithm is correct if it computes parse(a).", "Properties of the value computation algorithm Since each wRTG-LM has a finite set of rules, an infinite set of ASTs is only possible if the ASTs are cyclic in the following sense.", "Recall that R is the set of rules of the input G to the value computation algorithm (Alg.", "1).", "Let ρ ∈ (R ) * .", "We call ρ cyclic if |ρ| ≥ 2, ρ 1 = ρ |ρ| , and for every i, j ∈ N, if 1 ≤ i < j < |ρ|, then ρ i ρ j .", "From now on, let ρ ∈ (R ) * be cyclic, d ∈ T R , and c ∈ N. A path p in d is (c, ρ)-cyclic if ρ occurs exactly c times in seq (d, p) .", "We define the set cutout(d, ρ) which contains every tree obtained from d by cutting out at least one occurrence of ρ.", "We illustrate cutout by an example in Fig.", "5 .", "Definition 5.", "Let c ∈ N. A wRTG-LM G = (G , CFG ∅ ), K, wt is c-closed if K is distributive and d-complete, and for each d ∈ T R and cyclic string ρ ∈ (R ) * the following holds: if there is a (c, ρ)-cyclic path in d, then R ∩AST(G ) for every c ∈ N. Theorem 6.", "For every c ∈ N and c-closed wRTG-LM (G , CFG ∅ ), K, wt the following holds: wt (d) K ⊕ d ∈cutout(d,ρ) wt (d ) K = d ∈cutout(d,ρ) wt (d ) K .", "G is closed if it is c-closed for some c ∈ N. 
⊕ d∈AST(G ) wt (d) K = d∈AST(G ) (c) wt (d) K .", "Proof (sketch).", "As K is distributive, we can show by induction on n ∈ N that for every B ⊆ AST(G ) AST(G ) (c) with |B| = n, adding B to the index set of ⊕ does not change the sum's value.", "Then, as K is d-complete, the equality holds.", "This theorem reflects the desired property: given that our wRTG-LM is c-closed (with c ∈ N), each (possibly infinite) sum over all ASTs can be computed as a sum over the finite set AST(G ) (c) .", "Theorem 7.", "The value computation algorithm (Alg.", "1) is terminating and correct for every closed wRTG-LM G with language algebra CFG ∅ .", "Proof (sketch).", "Let G be c-closed.", "We note that in line 8, the value in the right-hand side of ⊕ always corresponds to the sum over the weights of some trees in (T R ) A ; this is due to the fact that K is distributive.", "By the form of recomputation in lines 3-12, each d ∈ (T R ) A contributes to that sum at most once.", "Furthermore, V only differs from V(A) if a tree from the finite set T (c) R has been used to compute V , but not V(A) (this is a consequence of G being closed).", "Thus, changed is only set to true finitely often and the algorithm eventually terminates.", "Then, after termination, V(A 0 ) = d∈AST(G ) (c) wt (d) K and Theorem 6 implies correctness.", "Properties of the weighted parsing algorithm We discuss two classes of wRTG-LMs for which the weighted parsing algorithm (Fig.", "4) is termi-nating and correct.", "(1) Closed wRTG-LMs with arbitrary language algebras.", "Each of them is a wRTG-LM (G, (L, φ)), (K, ⊕, 0, Ω, ⊕ ), wt which is c-closed for some c ∈ N, and c-closed is defined as in Def.", "5.", "(We note that this generalization is possible because Def.", "5 does not use any property of CFG ∅ .)", "The following particular wRTG-LMs are closed: • wRTG-LMs with acyclic RTG, where an RTG G is acyclic if AST(G) = AST(G) (0) , • wRTG-LMs with superior, d-complete Mmonoids as weight algebras, and • wRTG-LMs with weight algebra BD if no chain rule and ε-rule has probability 1.0 (as in Ex.", "3).", "(2) Non-looping wRTG-LMs with distributive Mmonoids as weight algebras.", "A wRTG-LM G is non-looping if for every syntactic object a and tree d over the set of rules of G which is evaluated to a the following holds: no proper subtree of d is evaluated to a. 
ADP problems can be specified by non-looping wRTG-LMs, because the syntactic objects of ADP represent (sub-)problems which have to be solved.", "Thus, if G is looping, then the solution of a subproblem would depend on itself, which contradicts dynamic programming.", "In general, non-looping is not decidable, but it is for particular language algebras, e.g., CFG ∆ .", "Lemma 8.", "For every closed or nonlooping wRTG-LM G with finitely decomposable language algebra and syntactic object a, the wRTG-LM cwds(G, a) is closed.", "Theorem 9.", "The weighted parsing algorithm (Fig.", "4) is terminating and correct for every closed or nonlooping wRTG-LM with finitely decomposable language algebra.", "Proof.", "The weighted parsing algorithm terminates because (a) the computation of cwds is terminating algorithm class of valid inputs class C 1 of RTG class C 2 of weight algebras (a) Knuth (1977) C 1 × C 2 RTG superior M-monoid (b) Goodman (1999) C 1 × C 2 acyclic RTG complete semiring (c) Mohri (2002) C 2 closed for C 1 monadic RTG commutative, d-complete semiring (d) Alg.", "1 closed wRTG-LM RTG distributive, d-complete M-monoid = V(A 0 ) .", "Comparison of value computation algorithms Here we compare our value computation algorithm (Alg.", "1) to the algorithm of Knuth (1977) , the second phase of Goodman (1999) , and the algorithm of Mohri (2002) .", "We focus on the question of applicability of the algorithms, i.e., we identify the classes of inputs for which the algorithms are terminating and correct (class of valid inputs).", "In order to have a basis for a fair comparison, we understand the inputs of the algorithms of Knuth (1977) , Goodman (1999) , and Mohri (2002) as particular wRTG-LMs of the form (G , CFG ∅ ), (K, ⊕, 0, Ω, ⊕ ), wt with G = (N , Σ , A 0 , R ).", "An algorithm is correct for such a wRTG-LM if it returns ⊕ d∈AST(G ) wt (d) K .", "We employ two parameters: C 1 (subset of the class of all RTGs) and C 2 (subset of the class of all weight algebras).", "Tab.", "1 shows the classes of valid inputs parameterized with values for C 1 and C 2 .", "Each valid input in rows (a)-(d) is a closed wRTG-LM.", "Thus, if one of the value computation algorithms (a)-(c) is applicable, then our value computation algorithm (Alg.", "1) is applicable too.", "In particular, Alg.", "1 is applicable to wRTG-LMs with the best derivation M-monoid BD as weight algebra (cf.", "Ex.", "3), which in general is the case for neither of algorithms (a)-(c).", "The reason for this is that BD is not superior (opposing (a)) and RTG-LMs are in general neither acyclic (opposing (b)) nor monadic (opposing (c)).", "The same holds for ADP problems.", "We cannot give a general statement about the complexity of our value computation algorithm (Alg.", "1), because the operations in the weight algebra of a wRTG-LM can be undecidable.", "If we abstract from the costs of particular operations, then we obtain the complexity of Mohri's algorithm.", "This complexity depends on the number of times the value of a nonterminal changes, which in general is not polynomial in the size of the input wRTG-LM.", "Mohri circumvents this problem by specifying the order in which nonterminals are processed for well-known classes of inputs, e.g., acyclic graphs or superior weight algebras.", "We can adapt this idea by imposing such an ordering on the iteration over the nonterminals in line 5.", "Thus our value computation algorithm achieves the same complexity as Knuth's algorithm (if the input is restricted to superior wRTG-LMs) or the algorithm 
in Goodman's second phase (if the input is restricted to acyclic wRTG-LMs), respectively.", "We note that although our value computation algorithm (Alg.", "1) has the same complexity as the other algorithms, on average it performs more computations than those.", "This is because in each iteration of lines 5-11, the values of all nonterminals are recomputed.", "This could be avoided by using a direct generalization of Mohri's algorithm to the branching case rather than Alg.", "1.", "However, the intricacies of such a generalization would exceed the scope of this paper." ] }
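To make the algorithm comparison above concrete: all of the listed value-computation methods amount to solving a system of equations over the nonterminals of a weighted RTG. The sketch below is an illustrative Python rendering of the naive fixed-point strategy discussed in the last paragraphs (recomputing every nonterminal in each round until nothing changes). It uses the Viterbi semiring (max, ×) as a stand-in for the general distributive, d-complete M-monoids of the paper, and the rule format `(lhs, weight, [rhs nonterminals])` is a simplifying assumption made only for this sketch.

```python
# Illustrative only: naive fixed-point value computation for a weighted RTG
# under the Viterbi semiring (max, *). Not the paper's Algorithm 1.
def best_values(nonterminals, rules, max_rounds=1000, eps=1e-12):
    value = {A: 0.0 for A in nonterminals}        # 0.0 = additive identity of (max, *)
    for _ in range(max_rounds):
        new_value = {}
        changed = False
        for A in nonterminals:                    # recompute every nonterminal per round
            best = 0.0
            for lhs, w, rhs in rules:
                if lhs != A:
                    continue
                v = w
                for B in rhs:                     # multiply in the children's current values
                    v *= value[B]
                best = max(best, v)               # semiring "addition" is max
            new_value[A] = best
            changed = changed or abs(best - value[A]) > eps
        value = new_value
        if not changed:                           # fixed point reached
            break
    return value

# Tiny example: S -> f(A, A) with weight 0.5, A -> a with weight 0.8,
# A -> g(A) with weight 0.5 (a cycle whose contribution vanishes under max/*).
rules = [("S", 0.5, ["A", "A"]), ("A", 0.8, []), ("A", 0.5, ["A"])]
print(best_values({"S", "A"}, rules))             # A: 0.8, S: 0.32
```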
{ "paper_header_number": [ "1", "2", "4", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Preliminaries", "The weighted parsing algorithm", "Termination and correctness", "Properties of the value computation algorithm", "Properties of the weighted parsing algorithm", "Comparison of value computation algorithms" ] }
GEM-SciDuet-train-55#paper-1103#slide-12
Closed wRTG-LMs
Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) Let . A wRTG-LM = ((,),,wt is -closed if is distributive and d-complete, and for each T and cyclic string the following holds: if there is a (, )-cyclic path in , then at most times closed, distributive, d-complete For every and -closed wRTG-LM ((,),,wt the following holds:
Richard Morbitz, Heiko Vogler: Weighted parsing for grammar-based language models (FSMNLP 2019) Let . A wRTG-LM = ((,),,wt is -closed if is distributive and d-complete, and for each T and cyclic string the following holds: if there is a (, )-cyclic path in , then at most times closed, distributive, d-complete For every and -closed wRTG-LM ((,),,wt the following holds:
[]
GEM-SciDuet-train-56#paper-1105#slide-0
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
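As a bridge from the architecture description in the paper text above (embedding layer, a single LSTM encoder whose final state initializes a single LSTM decoder, and a per-timestep softmax) to concrete code, here is a minimal tf.keras sketch of the base model's training graph with teacher forcing. It is not the authors' Keras 1.0.8 implementation; the layer sizes, dropout and learning rate follow the reported hyperparameters (128, 0.3, Adam at 0.003), while the vocabulary sizes and padding scheme are placeholder assumptions.

```python
# Minimal tf.keras sketch of the base character-level encoder-decoder (teacher forcing).
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB, TGT_VOCAB, EMB, HID, MAXLEN = 60, 60, 128, 128, 14   # placeholder sizes

# Encoder: embed the historical characters, keep only the final LSTM state.
enc_in = layers.Input(shape=(MAXLEN,), name="historical_chars")
enc_emb = layers.Dropout(0.3)(layers.Embedding(SRC_VOCAB, EMB, mask_zero=True)(enc_in))
_, state_h, state_c = layers.LSTM(HID, return_state=True)(enc_emb)

# Decoder: initialized with the encoder state and fed the shifted modern word form.
dec_in = layers.Input(shape=(MAXLEN,), name="modern_chars_shifted")
dec_emb = layers.Dropout(0.3)(layers.Embedding(TGT_VOCAB, EMB, mask_zero=True)(dec_in))
dec_seq = layers.LSTM(HID, return_sequences=True)(dec_emb, initial_state=[state_h, state_c])

# Per-timestep probability distribution over the modern character inventory.
char_probs = layers.Dense(TGT_VOCAB, activation="softmax")(dec_seq)

model = Model([enc_in, dec_in], char_probs)
model.compile(optimizer=tf.keras.optimizers.Adam(0.003),
              loss="sparse_categorical_crossentropy")
model.summary()
```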
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-0
Motivation
Attention vs. multi-task learning What is historical text normalization? Sample of a manuscript from Early New High German Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning What is historical text normalization? Sample of a manuscript from Early New High German Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
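The soft-attention step described in Section 3.4 of the paper text above (score every encoder output against the previous decoder state with a small network, normalize with a softmax, and take the weighted sum as the context vector) is easy to state in NumPy. The parameterization of the scoring network below is an assumption for illustration, not the authors' exact f_att.

```python
# NumPy sketch of one soft-attention step: alignment weights via a small scorer,
# then the context vector as a weighted sum of encoder outputs.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(enc_outputs, h_prev, W_enc, W_dec, v):
    """enc_outputs: (n, d_enc); h_prev: (d_dec,);
    W_enc: (d_att, d_enc); W_dec: (d_att, d_dec); v: (d_att,)."""
    scores = np.tanh(enc_outputs @ W_enc.T + h_prev @ W_dec.T) @ v  # one score per position
    alpha = softmax(scores)                                          # attention weights
    context = alpha @ enc_outputs                                    # weighted sum (z_t)
    return context, alpha

# Toy run: 7 encoder positions, 16-dim encoder outputs, 8-dim decoder state.
rng = np.random.default_rng(0)
enc, h = rng.normal(size=(7, 16)), rng.normal(size=8)
ctx, alpha = attention_context(enc, h, rng.normal(size=(12, 16)),
                               rng.normal(size=(12, 8)), rng.normal(size=12))
print(ctx.shape, alpha.sum())   # (16,) and weights summing to 1.0
```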
GEM-SciDuet-train-56#paper-1105#slide-1
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "Table 4 (columns: ID, Base model, Multi-task learning model): Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
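The first-derivative saliency analysis described in the paper text above (following Li et al., 2016) amounts to differentiating each output score with respect to the input character embeddings and aggregating the gradient per input timestep. Below is a minimal, hypothetical PyTorch sketch of that computation; it is not the authors' Keras implementation, the stand-in `TinyNormalizer` is a tagger-style simplification of their bi-LSTM encoder-decoder, and all sizes, names, and the fake character ids are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyNormalizer(nn.Module):
    """Tagger-style stand-in; the paper's real model is a bi-LSTM encoder-decoder."""
    def __init__(self, vocab=40, emb=16, hid=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, emb_input):           # emb_input: (1, T, emb)
        enc, _ = self.encoder(emb_input)
        return self.out(enc)                # (1, T, vocab)

def saliency_matrix(model, char_ids):
    """|d score_t / d x_i|, L2-aggregated over embedding dims -> (T_out, T_in)."""
    emb = model.embed(char_ids).detach().requires_grad_(True)
    logits = model(emb)
    scores = logits.max(dim=-1).values.squeeze(0)   # best score per output step
    rows = []
    for t in range(scores.shape[0]):
        grad, = torch.autograd.grad(scores[t], emb, retain_graph=True)
        rows.append(grad.squeeze(0).norm(dim=-1))   # one saliency value per input char
    return torch.stack(rows)

model = TinyNormalizer()
char_ids = torch.tensor([[3, 7, 12, 5, 9]])         # fake character ids
print(saliency_matrix(model, char_ids))             # 5 x 5 saliency matrix
```

Comparing such matrices across the base, attentional, and multi-task models (e.g., via Spearman correlation of the flattened scores) is the step that yields the reported correlation values.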
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-1
A corpus of Early New High German
Attention vs. multi-task learning What is historical text normalization? I Medieval religious treatise Interrogatio Sancti Anselmi de Passione Domini I 50 manuscripts and Sample from an Anselm manuscript Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning What is historical text normalization? I Medieval religious treatise Interrogatio Sancti Anselmi de Passione Domini I 50 manuscripts and Sample from an Anselm manuscript Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
GEM-SciDuet-train-56#paper-1105#slide-2
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "Table 4 (columns: ID, Base model, Multi-task learning model): Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
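The soft-attention step described in the Attention subsection above (a context vector formed as a weighted sum of encoder outputs, with weights produced by a small attention network from the encoder outputs and the previous decoder state) can be sketched in a few lines of numpy. The scoring network below is a generic additive-attention parameterization chosen only for illustration; the paper merely states that f_att is a multilayer perceptron, so the exact shapes and weights here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, enc_dim, dec_dim, att_dim = 7, 64, 64, 32    # assumed sizes

a = rng.normal(size=(T, enc_dim))        # bi-LSTM encoder outputs a_1 ... a_T
h_prev = rng.normal(size=(dec_dim,))     # previous decoder hidden state h_{t-1}

W_a = rng.normal(size=(enc_dim, att_dim))
W_h = rng.normal(size=(dec_dim, att_dim))
v = rng.normal(size=(att_dim,))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.tanh(a @ W_a + h_prev @ W_h) @ v    # f_att(a, h_{t-1}), one score per input position
alpha = softmax(scores)                         # attention weights over the input characters
z_t = alpha @ a                                 # context vector fed into the decoder's gates
print(alpha.round(3), z_t.shape)
```

In the full model the context vector is then concatenated with the previous output character and hidden state inside each LSTM gate, as spelled out in the paper's decoder equations.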
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-2
Examples for historical spellings
Attention vs. multi-task learning What is historical text normalization? Frau (woman) fraw, frawe, frawe, frauwe, frauwe, frow, frouw, vraw, vrow, vorwe, vrauwe, vrouwe Kind (child) chind, chinde, chindt, chint, kind, kinde, kindi, kindt, kint, kinth, kynde, kynt Mutter (mother) moder, moeder, mueter, mueter, muoter, muotter, muter, mutter, mvoter, mvter, mweter Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Normalization as the mapping of historical spellings to their modern-day equivalents.
Attention vs. multi-task learning What is historical text normalization? Frau (woman) fraw, frawe, frawe, frauwe, frauwe, frow, frouw, vraw, vrow, vorwe, vrauwe, vrouwe Kind (child) chind, chinde, chindt, chint, kind, kinde, kindi, kindt, kint, kinth, kynde, kynt Mutter (mother) moder, moeder, mueter, mueter, muoter, muotter, muter, mutter, mvoter, mvter, mweter Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Normalization as the mapping of historical spellings to their modern-day equivalents.
[]
GEM-SciDuet-train-56#paper-1105#slide-3
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-3
Previous work
Attention vs. multi-task learning What is historical text normalization? I Character-based statistical machine translation (CSMT) I Sequence labelling with neural networks I Bollmann and Søgaard (2016) Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce I Now: Character-based neural machine translation
Attention vs. multi-task learning What is historical text normalization? I Character-based statistical machine translation (CSMT) I Sequence labelling with neural networks I Bollmann and Søgaard (2016) Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce I Now: Character-based neural machine translation
[]
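The saliency analysis described in the paper_content_text above follows Li et al. (2016): take the gradient of one output score with respect to the input character embeddings and read its per-timestep magnitude as that character's importance for the prediction. A minimal stand-alone sketch is given below, in PyTorch purely for illustration (the paper's models were built in Keras); the tiny embedding-LSTM-linear stand-in model, the vocabulary size of 30, the seven-character input, and the chosen output class are all assumptions, not details from the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in character model: embedding -> LSTM -> linear scorer (not the paper's normalizer).
emb = nn.Embedding(30, 16)
rnn = nn.LSTM(16, 32, batch_first=True)
out = nn.Linear(32, 30)

src = torch.randint(0, 30, (1, 7))      # one seven-character input word
x = emb(src)
x.retain_grad()                         # keep the gradient at the embedding output
h, _ = rnn(x)
score = out(h)[0, -1, 5]                # score of an arbitrary output class at the last timestep
score.backward()

saliency = x.grad[0].norm(dim=-1)       # one first-derivative saliency value per input character
print(saliency)

Comparing such per-timestep saliency patterns across the base, attentional, and multi-task models is what the correlation scores reported above (e.g. ρ = 0.615 for the czeychen - zeichen pair) quantify.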
GEM-SciDuet-train-56#paper-1105#slide-4
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If $a = (a_1, \ldots, a_n)$ is the encoder's output and $h_t$ is the decoder's hidden state at timestep $t$, we first calculate a context vector $\hat{z}_t$ as a weighted combination of the output vectors $a_i$: $\hat{z}_t = \sum_{i=1}^{n} \alpha_i a_i$ (1) The weights $\alpha_i$ are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model ($f_{att}$): $\alpha = \mathrm{softmax}(f_{att}(a, h_{t-1}))$ (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state $h_{t-1}$ and the previously predicted output character $y_{t-1}$, but also on the context vector $\hat{z}_t$: $i_t = \sigma(W_i [h_{t-1}, y_{t-1}, \hat{z}_t] + b_i)$, $f_t = \sigma(W_f [h_{t-1}, y_{t-1}, \hat{z}_t] + b_f)$, $o_t = \sigma(W_o [h_{t-1}, y_{t-1}, \hat{z}_t] + b_o)$, $g_t = \tanh(W_g [h_{t-1}, y_{t-1}, \hat{z}_t] + b_g)$, $c_t = f_t \odot c_{t-1} + i_t \odot g_t$, $h_t = o_t \odot \tanh(c_t)$ (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate $i_t$, forget gate $f_t$, output gate $o_t$, cell state $c_t$ and hidden state $h_t$, where $W$ and $b$ are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993).", "The multi-task architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoder-decoder to generate a corresponding output sequence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-4
An encoder decoder model
Attention vs. multi-task learning k i n d E Embeddings S k i n d Embeddings c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Evaluation on 43 texts from the Anselm corpus
Attention vs. multi-task learning k i n d E Embeddings S k i n d Embeddings c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Evaluation on 43 texts from the Anselm corpus
[]
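The base model described in the row above is an embedding layer feeding a single-LSTM encoder, a single-LSTM decoder initialized from the encoder's final state, and a dense softmax layer over output characters, trained with cross-entropy and Adam (learning rate 0.003), dropout 0.3 on recurrent-layer inputs, layer sizes of 128, mini-batches of 50, and a maximum word length of 14. A rough sketch of that skeleton is below, in PyTorch for illustration only (the authors implemented their models in Keras 1.0.8); the vocabulary size of 60 and the single character embedding shared by encoder and decoder inputs are assumptions, not details from the paper.

import torch
import torch.nn as nn

class Seq2SeqNormalizer(nn.Module):
    def __init__(self, n_chars, emb_dim=128, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)        # character embeddings
        self.drop = nn.Dropout(0.3)                      # dropout on the input of each recurrent layer
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_chars)           # prediction layer (softmax applied via the loss)

    def forward(self, src, tgt_in):
        # Encode the historical spelling into a final (h, c) state ...
        _, state = self.encoder(self.drop(self.emb(src)))
        # ... and decode the modern form from that state (teacher forcing).
        dec_out, _ = self.decoder(self.drop(self.emb(tgt_in)), state)
        return self.out(dec_out)                         # (batch, tgt_len, n_chars) logits

model = Seq2SeqNormalizer(n_chars=60)
opt = torch.optim.Adam(model.parameters(), lr=0.003)
src = torch.randint(0, 60, (50, 14))                     # historical character indices
tgt_in = torch.randint(0, 60, (50, 14))                  # modern characters, shifted right
tgt_out = torch.randint(0, 60, (50, 14))                 # modern characters to predict
logits = model(src, tgt_in)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 60), tgt_out.reshape(-1))
loss.backward()
opt.step()

Greedy decoding, beam search with the lexical filter, and the second prediction layer of the multi-task variant all sit on top of this skeleton and are not shown here.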
GEM-SciDuet-train-56#paper-1105#slide-5
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If $a = (a_1, \ldots, a_n)$ is the encoder's output and $h_t$ is the decoder's hidden state at timestep $t$, we first calculate a context vector $\hat{z}_t$ as a weighted combination of the output vectors $a_i$: $\hat{z}_t = \sum_{i=1}^{n} \alpha_i a_i$ (1) The weights $\alpha_i$ are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model ($f_{att}$): $\alpha = \mathrm{softmax}(f_{att}(a, h_{t-1}))$ (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state $h_{t-1}$ and the previously predicted output character $y_{t-1}$, but also on the context vector $\hat{z}_t$: $i_t = \sigma(W_i [h_{t-1}, y_{t-1}, \hat{z}_t] + b_i)$, $f_t = \sigma(W_f [h_{t-1}, y_{t-1}, \hat{z}_t] + b_f)$, $o_t = \sigma(W_o [h_{t-1}, y_{t-1}, \hat{z}_t] + b_o)$, $g_t = \tanh(W_g [h_{t-1}, y_{t-1}, \hat{z}_t] + b_g)$, $c_t = f_t \odot c_{t-1} + i_t \odot g_t$, $h_t = o_t \odot \tanh(c_t)$ (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate $i_t$, forget gate $f_t$, output gate $o_t$, cell state $c_t$ and hidden state $h_t$, where $W$ and $b$ are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993).", "The multi-task architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoder-decoder to generate a corresponding output sequence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-5
Attentional model
Attention vs. multi-task learning k i n d E S k i n d c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Beam + Filter + Attention Evaluation on 43 texts from the Anselm corpus
Attention vs. multi-task learning k i n d E S k i n d c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce Beam + Filter + Attention Evaluation on 43 texts from the Anselm corpus
[]
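The saliency analysis described in the paper content above follows Li et al. (2016): the saliency of an input timestep with respect to an output timestep is the magnitude of the gradient of that output score taken with respect to the input character's embedding. The PyTorch sketch below is a minimal illustration of that computation; the toy model, names, and shapes are assumptions, not the authors' Keras implementation.

import torch
import torch.nn as nn

def saliency_matrix(model, char_ids, n_out_steps):
    # One row per output timestep, one column per input character.
    embeds = model.embed(char_ids).detach().requires_grad_(True)   # (T_in, dim)
    logits = model.decode(embeds, n_out_steps)                     # (T_out, vocab)
    rows = []
    for t in range(n_out_steps):
        score = logits[t].max()                                    # score of the predicted character
        grad, = torch.autograd.grad(score, embeds, retain_graph=True)
        rows.append(grad.norm(dim=-1))                             # saliency of each input timestep
    return torch.stack(rows)                                       # (T_out, T_in)

class ToyNormalizer(nn.Module):
    # Degenerate stand-in model; any encoder-decoder exposing embed()/decode() would do.
    def __init__(self, vocab=30, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim)
        self.out = nn.Linear(dim, vocab)

    def decode(self, embeds, n_out_steps):
        states, _ = self.rnn(embeds.unsqueeze(1))                  # (T_in, 1, dim)
        summary = states[-1, 0]                                    # crude fixed-length encoding
        return self.out(summary).unsqueeze(0).repeat(n_out_steps, 1)

model = ToyNormalizer()
scores = saliency_matrix(model, torch.tensor([3, 7, 2, 9, 5]), n_out_steps=4)
print(scores.shape)   # torch.Size([4, 5])

Flattening two such matrices for the same input/output pair and correlating them (e.g. with Spearman's rho) yields the kind of model-comparison numbers reported in the text.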
GEM-SciDuet-train-56#paper-1105#slide-6
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-6
Learning to pronounce
Attention vs. multi-task learning Can we improve results with multi-task learning? Idea: grapheme-to-phoneme mapping as auxiliary task. CELEX 2 lexical database (Baayen et al., 1995). Sample mappings for German: Abend → ab@nt, nicht → nIxt
Attention vs. multi-task learning Can we improve results with multi-task learning? Idea: grapheme-to-phoneme mapping as auxiliary task. CELEX 2 lexical database (Baayen et al., 1995). Sample mappings for German: Abend → ab@nt, nicht → nIxt
[]
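Equations (1) and (2) in the paper content above define the soft-attention context vector as a weighted sum of encoder outputs, with weights produced by a small feed-forward attention network over the encoder outputs and the previous decoder state. A minimal PyTorch sketch of that step follows; the 256/128 sizes mirror the concatenated bi-directional encoder and decoder sizes quoted in the text, while the attention-layer size and all names are assumptions.

import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, att_dim=64):
        super().__init__()
        self.f_att = nn.Sequential(
            nn.Linear(enc_dim + dec_dim, att_dim),
            nn.Tanh(),
            nn.Linear(att_dim, 1),
        )

    def forward(self, enc_outputs, h_prev):
        # enc_outputs: (T_in, enc_dim); h_prev: (dec_dim,) previous decoder state
        h_rep = h_prev.unsqueeze(0).expand(enc_outputs.size(0), -1)
        scores = self.f_att(torch.cat([enc_outputs, h_rep], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)             # Eq. (2): attention weights
        z = (alpha.unsqueeze(-1) * enc_outputs).sum(0)   # Eq. (1): context vector
        return z, alpha

att = SoftAttention(enc_dim=256, dec_dim=128)
z, alpha = att(torch.randn(7, 256), torch.randn(128))
print(z.shape, alpha.shape)   # torch.Size([256]) torch.Size([7])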
GEM-SciDuet-train-56#paper-1105#slide-7
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-7
Multi-task learning
Prediction layer for CELEX task k i n d E Prediction layer for historical task S k i n d c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce n I x t E S n I x t n i c h t Beam + Filter + Attention
Prediction layer for CELEX task k i n d E Prediction layer for historical task S k i n d c h i n t Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce n I x t E S n I x t n i c h t Beam + Filter + Attention
[]
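The slide content above depicts the paper's multi-task setup: one shared encoder-decoder with separate prediction layers for the historical-normalization task and the CELEX grapheme-to-phoneme task. As a rough illustration of that weight-sharing idea (and only that), here is a minimal sketch. The paper's models were implemented in Keras; this sketch uses PyTorch purely for illustration, and all module names, vocabulary sizes, and the toy batch are assumptions rather than the authors' code or data.

```python
# Hedged sketch of a two-headed encoder-decoder; not the authors' implementation.
import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    def __init__(self, vocab=60, emb=128, hidden=128, phon_vocab=50):
        super().__init__()
        self.src_emb = nn.Embedding(vocab, emb)
        self.tgt_emb = nn.Embedding(vocab, emb)          # shared here for simplicity
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        # Two task-specific prediction layers over the shared decoder states.
        self.norm_head = nn.Linear(hidden, vocab)        # historical -> modern
        self.phon_head = nn.Linear(hidden, phon_vocab)   # graphemes -> phonemes

    def forward(self, src, tgt_in, task):
        _, state = self.encoder(self.src_emb(src))       # encode the full input word
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)
        head = self.norm_head if task == "norm" else self.phon_head
        return head(dec_out)                             # (batch, len, classes)

model = MultiTaskSeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=0.003)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 8 "words" of 10 input and 12 output characters each (random IDs).
src = torch.randint(0, 60, (8, 10))
tgt_in = torch.randint(0, 60, (8, 12))
tgt_out = torch.randint(0, 50, (8, 12))                  # valid targets for both heads

# Training alternates between the two tasks; the encoder/decoder weights are
# updated by both tasks, each prediction layer only by its own task.
for task in ("norm", "phon"):
    logits = model(src, tgt_in, task)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```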
GEM-SciDuet-train-56#paper-1105#slide-8
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
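The paper content above (Sec. 3.4) defines the soft-attention decoder via Eqs. 1–2: an MLP f_att scores each encoder output against the previous decoder hidden state, the scores are softmax-normalized into weights α, and the context vector ẑ_t is the weighted sum of the encoder outputs. The following is a hedged sketch of just that step, not the authors' implementation; the dimensions, class names, and the choice of PyTorch are assumptions made for illustration only.

```python
# Minimal soft-attention step (cf. Eqs. 1-2 above); all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, att_dim=64):
        super().__init__()
        # f_att: scores one encoder timestep given the previous decoder state.
        self.f_att = nn.Sequential(
            nn.Linear(enc_dim + dec_dim, att_dim), nn.Tanh(), nn.Linear(att_dim, 1)
        )

    def forward(self, a, h_prev):
        # a: (batch, n, enc_dim) encoder outputs; h_prev: (batch, dec_dim)
        n = a.size(1)
        h_rep = h_prev.unsqueeze(1).expand(-1, n, -1)        # repeat state per timestep
        scores = self.f_att(torch.cat([a, h_rep], dim=-1))   # (batch, n, 1)
        alpha = F.softmax(scores.squeeze(-1), dim=-1)        # Eq. 2: attention weights
        z = torch.bmm(alpha.unsqueeze(1), a).squeeze(1)      # Eq. 1: weighted sum
        return z, alpha

# Toy usage: 7 encoder timesteps from a bidirectional encoder (2 * 128 dims).
att = SoftAttention(enc_dim=256, dec_dim=128)
z, alpha = att(torch.randn(4, 7, 256), torch.randn(4, 128))
print(z.shape, alpha.shape)   # torch.Size([4, 256]) torch.Size([4, 7])
```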
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-8
Why does MTL not improve with attention
Attention vs. multi-task learning Attention and MTL learn similar functions of the input data. MTL can be used to coerce the learner to attend to patterns in the input it would otherwise ignore. This is done by forcing it to learn internal representations to support related tasks that depend on such patterns. Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning Attention and MTL learn similar functions of the input data. MTL can be used to coerce the learner to attend to patterns in the input it would otherwise ignore. This is done by forcing it to learn internal representations to support related tasks that depend on such patterns. Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
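The slide above argues that attention and multi-task learning learn similar functions of the input, which the paper supports with a first-derivative saliency analysis (Sec. 5.3): differentiating an output prediction with respect to the input characters and comparing the resulting saliency matrices across models. Below is a minimal sketch of how such a per-timestep saliency score can be computed with automatic differentiation; the tiny model is a stand-in, not the paper's trained normalizer, and all names and sizes are assumptions.

```python
# First-derivative saliency sketch (after Li et al., 2016); placeholder model.
import torch
import torch.nn as nn

emb = nn.Embedding(30, 16)
rnn = nn.LSTM(16, 32, batch_first=True)
out = nn.Linear(32, 30)

x = torch.tensor([[3, 7, 12, 7, 5, 9, 2]])          # one encoded input "word"
e = emb(x).detach().requires_grad_(True)            # make the embeddings a leaf
h, _ = rnn(e)
score = out(h)[0, -1, 8]                            # score of one output class
score.backward()                                    # gradients w.r.t. the input

saliency = e.grad[0].abs().max(dim=-1).values       # one score per input character
print(saliency)                                     # higher = more influential
```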
GEM-SciDuet-train-56#paper-1105#slide-9
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-9
Comparing the model outputs
Attention vs. multi-task learning Base model prandert pranget gewarnt uberbroch uberbrache uber ubergebe sollt sollt sollt sollte B gewarntet gewarntet gewarnt gewand uberbeh ubereube ubergebe uber sollte sollte sollte sollte Target gewarnt uberhob sollte Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning Base model prandert pranget gewarnt uberbroch uberbrache uber ubergebe sollt sollt sollt sollte B gewarntet gewarntet gewarnt gewand uberbeh ubereube ubergebe uber sollte sollte sollte sollte Target gewarnt uberhob sollte Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
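The error-overlap and agreement figures quoted in the paper_content above (errors resolved or introduced relative to the base model, and the word-level Cohen's kappa between the attention and MTL outputs) come down to straightforward bookkeeping over aligned token lists. The sketch below is only illustrative: it assumes each model's predictions and the gold normalizations are available as equal-length lists of word forms, the function and variable names (error_sets, count_error_overlap, preds_base, ...) are made up for this example rather than taken from the released code, and it assumes the reported kappa treats each predicted word form as a categorical label, which the paper does not spell out.

```python
from sklearn.metrics import cohen_kappa_score

def error_sets(preds, gold):
    """Indices of tokens a model normalizes incorrectly (word-level exact match)."""
    return {i for i, (p, g) in enumerate(zip(preds, gold)) if p != g}

def count_error_overlap(preds_base, preds_att, preds_mtl, gold):
    """Errors resolved / introduced by attention and MTL relative to the base model."""
    base, att, mtl = (error_sets(p, gold) for p in (preds_base, preds_att, preds_mtl))
    return {
        "base_errors": len(base),
        "resolved_by_attention": len(base - att),
        "resolved_by_mtl": len(base - mtl),
        "resolved_by_both": len((base - att) & (base - mtl)),
        "introduced_by_attention": len(att - base),
        "introduced_by_mtl": len(mtl - base),
        "introduced_by_both": len((att - base) & (mtl - base)),
        # word-level agreement between the two models' outputs
        "kappa_att_vs_mtl": cohen_kappa_score(preds_att, preds_mtl),
    }
```

Applied to the 1,000-token test predictions of each text and averaged over the 43 evaluation texts, this kind of count is what yields figures such as 47.7/45.4 resolved errors and κ=0.834 in the paper_content above.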
GEM-SciDuet-train-56#paper-1105#slide-10
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-10
Saliency plots
Attention vs. multi-task learning for words ≥ 7 characters, Attention/MTL correlate most Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning for words ≥ 7 characters, Attention/MTL correlate most Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
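The saliency comparison summarized in this slide follows the first-derivative saliency of Li et al. (2016): the importance of an input character for a given output step is the magnitude of the gradient of that step's predicted score with respect to the input embedding. The authors implemented their models in Keras 1.0.8; the sketch below instead uses PyTorch with a deliberately tiny stand-in model (TinyCharModel, equal input/output lengths) purely to keep the gradient bookkeeping short, so the model class and all shapes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

class TinyCharModel(nn.Module):
    """Stand-in character model: embedding -> LSTM -> per-step output logits."""
    def __init__(self, vocab=40, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, embedded):                    # embedded: (1, T_in, emb)
        hidden_states, _ = self.lstm(embedded)
        return self.out(hidden_states)              # (1, T_in, vocab)

def first_derivative_saliency(model, char_ids):
    """Saliency matrix (T_out, T_in): L2 norm of d(predicted score)/d(input embedding)."""
    embedded = model.emb(char_ids).detach().requires_grad_(True)
    logits = model(embedded)
    scores = logits.max(dim=-1).values.squeeze(0)   # score of the predicted char per step
    rows = []
    for t in range(scores.size(0)):
        grad, = torch.autograd.grad(scores[t], embedded, retain_graph=True)
        rows.append(grad.squeeze(0).norm(dim=-1))   # L2 over the embedding dimension
    return torch.stack(rows)

def saliency_correlation(sal_a, sal_b):
    """Spearman's rho between two flattened saliency matrices of equal shape."""
    rho, _ = spearmanr(sal_a.flatten().numpy(), sal_b.flatten().numpy())
    return rho

# Toy usage: compare two (here untrained) stand-in models on the same made-up character ids.
x = torch.randint(0, 40, (1, 8))
print(saliency_correlation(first_derivative_saliency(TinyCharModel(), x),
                           first_derivative_saliency(TinyCharModel(), x)))
```

Computed over real model pairs and restricted to longer inputs (≥ 7 characters), this per-word-pair correlation is the quantity averaged into the ρ values reported in the paper_content above.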
GEM-SciDuet-train-56#paper-1105#slide-11
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
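The "Final output" analysis quoted above (Sec. 5.2) compares which base-model errors the attention and MTL variants resolve or introduce, and reports word-level agreement between the two variants as Cohen's kappa. Below is a minimal Python sketch of that bookkeeping; the function names and the toy word lists are illustrative placeholders, not part of the authors' implementation.

```python
# Sketch of the word-level agreement analysis (Sec. 5.2): count base-model
# errors that the attention / MTL models resolve, their overlap, and the
# Cohen's kappa of the two models' word-level correctness.

def correctness(gold, pred):
    """Binary vector: 1 if the predicted normalization matches the gold form."""
    return [int(g == p) for g, p in zip(gold, pred)]

def cohens_kappa(a, b):
    """Cohen's kappa for two binary label sequences of equal length."""
    n = len(a)
    po = sum(int(x == y) for x, y in zip(a, b)) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                    # marginal P(label = 1)
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)               # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def error_overlap(gold, base, model_a, model_b):
    """Base-model errors resolved by model_a / model_b, and their overlap."""
    resolved_a, resolved_b = set(), set()
    for i, (g, pb, pa, pm) in enumerate(zip(gold, base, model_a, model_b)):
        if pb != g and pa == g:
            resolved_a.add(i)
        if pb != g and pm == g:
            resolved_b.add(i)
    return len(resolved_a), len(resolved_b), len(resolved_a & resolved_b)

# Toy usage with made-up word lists (not real system output):
gold = ["und", "uns", "um", "zeichen"]
base = ["vnd", "uns", "umb", "zeichen"]
attn = ["und", "uns", "umb", "zeichen"]
mtl  = ["und", "uns", "um", "zeihen"]
print(error_overlap(gold, base, attn, mtl))
print(cohens_kappa(correctness(gold, attn), correctness(gold, mtl)))
```

On real data the same functions would be applied per test set (1,000 tokens each) and averaged, which is how the figures quoted above are reported.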
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-11
Conclusion
Attention vs. multi-task learning • Encoder/decoder models for historical text normalization • Beam search & attention improve results further • MTL with grapheme-to-phoneme task helps • Attention and MTL have a similar effect • Can this be reproduced on other tasks? • What factors affect this (choice of attention mechanism/auxiliary task/...)? Marcel Bollmann, Joachim Bingel, Anders Søgaard: Learning attention for hist. normalization by learning to pronounce
Attention vs. multi-task learning • Encoder/decoder models for historical text normalization • Beam search & attention improve results further • MTL with grapheme-to-phoneme task helps • Attention and MTL have a similar effect • Can this be reproduced on other tasks? • What factors affect this (choice of attention mechanism/auxiliary task/...)? Marcel Bollmann, Joachim Bingel, Anders Søgaard: Learning attention for hist. normalization by learning to pronounce
[]
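The saliency analysis in the record above (Sec. 5.3) compares first-derivative saliency matrices of different models by correlating them, restricted to longer inputs (at least 7 characters). The sketch below covers only the comparison step; the saliency matrices themselves are assumed to be given (in practice they come from backpropagating each output prediction to the input), and the random arrays in the demo are stand-ins for real scores.

```python
# Sketch of the saliency comparison from Sec. 5.3: given saliency matrices
# (one row per output character, one column per input character) produced by
# two models for the same input/output pair, measure how similarly the models
# distribute importance over the input via Spearman correlation.
import numpy as np
from scipy.stats import spearmanr

def saliency_agreement(sal_a, sal_b):
    """Spearman correlation between two flattened saliency matrices."""
    assert sal_a.shape == sal_b.shape
    rho, _ = spearmanr(sal_a.ravel(), sal_b.ravel())
    return rho

def mean_agreement(pairs, min_input_len=7):
    """Mean/std of agreement over word pairs with sufficiently long inputs."""
    scores = [saliency_agreement(a, b)
              for (a, b) in pairs
              if a.shape[1] >= min_input_len]   # columns = input characters
    return float(np.mean(scores)), float(np.std(scores))

# Toy usage with random matrices standing in for real saliency scores:
rng = np.random.default_rng(0)
pairs = [(rng.random((7, 8)), rng.random((7, 8))) for _ in range(5)]
print(mean_agreement(pairs))
```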
GEM-SciDuet-train-56#paper-1105#slide-12
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
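Section 3.4 of the paper content above defines soft attention through Eqs. (1)-(2): a small MLP f_att scores each encoder output against the previous decoder state, a softmax turns the scores into weights alpha, and the context vector z_t is the weighted sum of encoder outputs. The NumPy sketch below is one possible reading of those equations; the exact form of f_att, the weight names, and the dimensions are assumptions rather than the authors' Keras code.

```python
# Minimal NumPy sketch of the soft-attention step in Eqs. (1)-(2): score each
# encoder output a_i against the previous decoder state h_{t-1} with a small
# MLP (f_att), normalize with a softmax, and take the weighted sum as the
# context vector z_t.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(a, h_prev, W_a, W_h, v):
    """a: (n, d_enc) encoder outputs; h_prev: (d_dec,) previous decoder state."""
    scores = np.tanh(a @ W_a + h_prev @ W_h) @ v   # f_att: one tanh layer, scalar score per position
    alpha = softmax(scores)                        # Eq. (2)
    z = alpha @ a                                  # Eq. (1), shape (d_enc,)
    return z, alpha

# Toy dimensions: 6 input characters, encoder dim 8, decoder dim 5, attention dim 4
rng = np.random.default_rng(1)
a = rng.normal(size=(6, 8))
h_prev = rng.normal(size=5)
W_a, W_h, v = rng.normal(size=(8, 4)), rng.normal(size=(5, 4)), rng.normal(size=4)
z, alpha = attention_context(a, h_prev, W_a, W_h, v)
print(alpha.round(3), z.shape)
```

The resulting context vector would then be fed into the decoder LSTM together with h_{t-1} and y_{t-1}, as in Eq. (3).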
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-12
Dealing with spelling variation
The problems... Normalization... • Difficult to annotate with tools aimed at modern data • High variance in spelling • None/very little training • Enables re-using of • Useful annotation layer (e.g. for corpus query) Normalization as the mapping of historical spellings to their modern-day equivalents. Marcel Bollmann, Joachim Bingel, Anders Søgaard: Learning attention for hist. normalization by learning to pronounce
The problems... Normalization... • Difficult to annotate with tools aimed at modern data • High variance in spelling • None/very little training • Enables re-using of • Useful annotation layer (e.g. for corpus query) Normalization as the mapping of historical spellings to their modern-day equivalents. Marcel Bollmann, Joachim Bingel, Anders Søgaard: Learning attention for hist. normalization by learning to pronounce
[]
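Section 3.5 above describes the multi-task setup: encoder and decoder are shared, each task has its own prediction layer, and training alternates between normalization samples and grapheme-to-phoneme samples drawn from CELEX. The skeleton below illustrates only that alternation; train_step is a placeholder for a real forward/backward pass, and the batch-level granularity shown here is an assumption, not a detail taken from the authors' code.

```python
# Schematic of the multi-task training regime (Sec. 3.5): shared
# encoder-decoder parameters, one output head per task, alternating updates
# between the main task (normalization) and the auxiliary task (g2p).
import random

def train_step(shared_params, task_head, batch):
    """Placeholder for one forward/backward/update step on a batch."""
    # A real implementation would compute the decoder's cross-entropy loss
    # under `task_head` and update both the head and the shared parameters.
    return 0.0

def train_mtl(norm_batches, g2p_batches, shared_params, heads, epochs=50):
    for epoch in range(epochs):
        random.shuffle(norm_batches)
        for norm_batch in norm_batches:
            # one main-task batch ...
            train_step(shared_params, heads["normalization"], norm_batch)
            # ... followed by one auxiliary-task batch sampled from CELEX
            aux_batch = random.choice(g2p_batches)
            train_step(shared_params, heads["g2p"], aux_batch)

# Toy call with dummy data structures:
train_mtl(norm_batches=[["vnd->und"]], g2p_batches=[["Jungfrau->'jUN-frB"]],
          shared_params={}, heads={"normalization": {}, "g2p": {}}, epochs=1)
```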
GEM-SciDuet-train-56#paper-1105#slide-13
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
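The saliency analysis reported in the paper content above follows Li et al. (2016): first-derivative saliency of each input timestep with respect to a chosen output prediction, compared across models. The sketch below shows one way to compute such a saliency matrix. It is not the authors' Keras code; `model`, `one_hot_input`, and `target_ids` are illustrative names, and a PyTorch character-level model with the commented shapes is assumed.

```python
import torch

def saliency_matrix(model, one_hot_input, target_ids):
    """|d logit_t / d input| summed over the character dimension,
    giving a matrix of shape (output_len, input_len)."""
    rows = []
    for t, char_id in enumerate(target_ids):
        # fresh leaf tensor per output step so gradients do not accumulate
        x = one_hot_input.clone().float().requires_grad_(True)
        logits = model(x)                      # assumed shape: (output_len, vocab_out)
        model.zero_grad()
        logits[t, char_id].backward()          # gradient of one output logit w.r.t. the input
        rows.append(x.grad.abs().sum(dim=-1).detach())   # saliency per input timestep
    return torch.stack(rows)                   # (output_len, input_len)
```

Matrices obtained this way for two models can then be compared, for example with `scipy.stats.spearmanr` over the flattened scores, mirroring the rho values quoted in the analysis above.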
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-13
Attention mechanism details
I Attention mechanism follows Xu et al. (2015) c_t f_t h_t o_t c_{t-1} i_t g_t tanh(c_t) Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
I Attention mechanism follows Xu et al. (2015) c_t f_t h_t o_t c_{t-1} i_t g_t tanh(c_t) Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
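The slide above lists the quantities of the attention-conditioned LSTM step (c_t, f_t, h_t, o_t, c_{t-1}, i_t, g_t) in the style of Xu et al. (2015), which the paper also spells out in its attention section. The sketch below re-implements that single decoder step. It is an illustrative PyTorch re-implementation, not the authors' Keras code; the class name, dimension arguments, and tensor shapes in the comments are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTMStep(nn.Module):
    """One decoder step conditioned on [h_{t-1}, y_{t-1}, z_t], following the
    soft-attention formulation summarized on this slide (Xu et al., 2015)."""

    def __init__(self, enc_dim, emb_dim, hid_dim, att_dim):
        super().__init__()
        # attention model f_att: MLP over encoder outputs and previous hidden state
        self.f_att = nn.Sequential(nn.Linear(enc_dim + hid_dim, att_dim),
                                   nn.Tanh(),
                                   nn.Linear(att_dim, 1))
        in_dim = hid_dim + emb_dim + enc_dim      # [h_{t-1}, y_{t-1}, z_t]
        self.W_i = nn.Linear(in_dim, hid_dim)     # input gate
        self.W_f = nn.Linear(in_dim, hid_dim)     # forget gate
        self.W_o = nn.Linear(in_dim, hid_dim)     # output gate
        self.W_g = nn.Linear(in_dim, hid_dim)     # candidate cell state

    def forward(self, a, h_prev, c_prev, y_prev):
        # a: (n, enc_dim) encoder outputs; h_prev, c_prev: (hid_dim,); y_prev: (emb_dim,)
        scores = self.f_att(torch.cat([a, h_prev.expand(a.size(0), -1)], dim=-1))
        alpha = F.softmax(scores.squeeze(-1), dim=0)        # attention weights over input steps
        z_t = (alpha.unsqueeze(-1) * a).sum(dim=0)          # weighted context vector
        x = torch.cat([h_prev, y_prev, z_t], dim=-1)
        i_t = torch.sigmoid(self.W_i(x))
        f_t = torch.sigmoid(self.W_f(x))
        o_t = torch.sigmoid(self.W_o(x))
        g_t = torch.tanh(self.W_g(x))
        c_t = f_t * c_prev + i_t * g_t
        h_t = o_t * torch.tanh(c_t)
        return h_t, c_t, alpha
```

Returning `alpha` alongside the new state makes it easy to inspect what the decoder attends to at each output character.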
GEM-SciDuet-train-56#paper-1105#slide-14
1105
Learning attention for historical text normalization by learning to pronounce
Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-the-art by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183 ], "paper_content_text": [ "Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents.", "A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012) , which is the mapping of historical spelling variants to standardized/modernized forms (e.g.", "vnd → und 'and') .", "Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data.", "Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder-decoder recurrent neural networks (RNNs) to induce our transduction models.", "This is similar to models that have been proposed for neural machine translation (e.g., Cho et al.", "(2014) ), so essentially, our approach could also be considered a specific case of character-based neural machine translation.", "By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model's complexity and the amount of data required to train it effectively.", "Using an encoder-decoder architecture removes the need for an explicit character alignment between historical and modern wordforms.", "Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models.", "We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German.", "Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization.", "• We evaluate several such architectures across 44 datasets of Early New High German.", "• We show that such architectures benefit from bidirectional encoding, beam search, and attention.", "• We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention.", "• We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant.", "• We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017.", "In sum, we both push the 
state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning.", "Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.", "1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise.", "Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics.", "For example, the modern German word Frau 'woman' can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.", "2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al.", "(2015) .", "For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions).", "Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens.", "For all texts, we removed tokens that consisted solely of punctuation characters.", "We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts.", "Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers.", "1 https://www.linguistics.rub.de/ anselm/ 2 We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf.", "the website).", "Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task.", "This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes.", "We use the German part of the CELEX lexical database (Baayen et al., 1995) , particularly the database of phonetic transcriptions of German wordforms.", "The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs).", "For example, the word Jungfrau 'virgin' is represented as 'jUN-frB.", "Model Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al.", "(2014) .", "It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder's output and generates a probability distribution over the output classes at each timestep.", "For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997) .", "LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven 
advantageous to standard RNNs on many tasks.", "We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder.", "By using this encoder-decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output Figure 1 : Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top.", "Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model.", "Embedding layers for the inputs are not explicitly shown.", "pairs of different lengths.", "Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows.", "An example illustration of the unrolled network is shown in Fig.", "1 .", "Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms.", "We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y 1 , ..., y n ) is the correct output word (as a list of one-hot vectors of output characters) and y = (ŷ 1 , ...,ŷ n ) is the model's output, we minimize the mean loss − n i=1 y i logŷ i over all training samples.", "For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003.", "To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters.", "This only affects 172 samples across the whole dataset, and is only done during training.", "In other words, we evaluate our models across all the test examples.", "Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep.", "This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is non-sensical.", "We therefore also experiment with beam search decoding, setting the beam size to 5.", "Finally, we also experiment with using a lexical filter during the decoding step.", "Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon.", "This is again intended to reduce the occurrence of nonsensical outputs.", "For the lexicon, we use all word forms from CELEX (cf.", "Sec.", "2) plus the target word forms from the training set.", "3 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence.", "This is a strong assumption, especially with long input sequences.", "Attention mechanisms give us more flexibility.", "The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to \"attend\" to different parts of the input character sequence at each time step of the output generation.", "Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far.", "Our implementation is 
identical to the decoder with soft attention described by Xu et al.", "(2015) .", "If a = (a 1 , ..., a n ) is the encoder's output and h t is the decoder's hidden state at timestep t, we first calculate a context vectorẑ t as a weighted combination of the output vectors a i : z t = n i=1 α i a i (1) The weights α i are derived by feeding the encoder's output and the decoder's hidden state from the previous timestep into a multilayer perceptron, called the attention model (f att ): α = sof tmax(f att (a, h t−1 )) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state h t−1 and the previously predicted output character y t−1 , but also on the context vectorẑ t : i t = σ(W i [h t−1 , y t−1 ,ẑ t ] + b i ) f t = σ(W f [h t−1 , y t−1 ,ẑ t ] + b f ) o t = σ(W o [h t−1 , y t−1 ,ẑ t ] + b o ) g t = tanh(W g [h t−1 , y t−1 ,ẑ t ] + b g ) c t = f t c t−1 + i t g t h t = o t tanh(c t ) (3) In Eq.", "3, we follow the traditional LSTM description consisting of input gate i t , forget gate f t , output gate o t , cell state c t and hidden state h t , where W and b are trainable parameters.", "For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers.", "While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component.", "Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993 ).", "The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks.", "Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes.", "This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation.", "We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task.", "We use the encoderdecoder to generate a corresponding output se-quence, whether a modern word form or a pronunciation.", "Doing so, we suffer a loss with respect to the true output sequence and update the model parameters.", "The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers.", "Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters.", "This manuscript is left out of the averages reported below.", "We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters.", "For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted 
by the size of the training set for the main task only).", "All these parameters were set on the B manuscript alone.", "Implementation We implemented all of the models in Keras (Chollet, 2015) .", "Any parameters not explicitly described here were left at their default values in Keras v1.0.8.", "Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training.", "We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually.", "Baselines We compare our architectures to several competitive baselines.", "Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec.", "3.3) to align input and output characters.", "Our second baseline uses the same alignment, but trains a deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016).", "We evaluate this tagger using both standard and multi-task learning.", "Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012) .", "4 Word accuracy We use word-level accuracy as our evaluation metric.", "While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful.", "Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores).", "We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines.", "All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016) .", "We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture -with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms.", "For our multi-task architecture, we also observe gains when we add beam search and filtering, but importantly, adding attention does not help.", "In fact, attention hurts the performance of our multitask architecture quite significantly.", "Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention.", "We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1 , but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention.", "This is the hypothesis that we will try to validate in Sec.", "5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks.", "Sample predictions A small selection of predictions from our models is shown in Table 2 .", "They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others.", "Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging '(he) fared', while decoding without a filter produces the non-word erbiggen.", "Even for 
herczenlichen (modern herzlichen 'heartfelt'), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes 'heartily loved').", "In some cases (such as gewarnet 'warned'), only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g.", "dicke, herzel).", "We will investigate this property further in Sec.", "5.", "Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text.", "Fig.", "2 shows the learned character embeddings.", "In the representations from the base model ( Fig.", "2a ), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text.", "Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals.", "On the other hand, the MTL model shows a better generalization of the training data ( Fig.", "2b) : here, <u> is grouped closer to other vowel characters and far away from <v>/<f>.", "Also, <n> and <m> are now in close proximity.", "We can also visualize the internal word representations that are produced by the encoder (Fig.", "3) .", "Here, we chose words that demonstrate the interchangeable use of <u> and <v>.", "Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>.", "However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization.", "In the MTL model, however, these examples are indeed clustered together.", "5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy.", "However, we observe a decline in word accuracy for models that combine multi-task learning with attention.", "A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998) .", "We put this hypothesis to the test by closely investigating properties of the individual models below.", "Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities.", "We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.", "5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case).", "With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes).", "We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson's r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as high as 96.", "Figure 4 illustrates 
these highly parallel weight changes for the different models when trained on the N4 dataset.", "Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system.", "We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors.", "Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average.", "Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models).", "Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen's kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning).", "Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models.", "We follow Li et al.", "(2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models.", "The higher the saliency of an input timestep, the more important it is in determining the model's prediction at a given output timestep.", "Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction.", "Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model.", "Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen -zeichen 'sign'.", "Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them.", "A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥ 7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. 
MTL model, while the base model correlates with either of them by ρ < 0.21.", "Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013) .", "A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; Sánchez-Martínez et al., 2013; Scherrer and Erjavec, 2013; or dialectal data (Scherrer and Ljubešić, 2016) .", "This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks.", "Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014) .", "Neural networks have rarely been applied to historical spelling normalization so far.", "Azawi et al.", "(2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms.", "Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step.", "Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016) .", "It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016) , though so far not with attentional decoders.", "Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines.", "Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016) , without requiring a prior character alignment.", "Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task.", "We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms.", "We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention.", "Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model.", "Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in 'in' or ihn 'him') and conceivably makes the task harder for others.", "Reranking the predictions with a language model could be one possible way to improve on this.", ", for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also intro-duces context.", "Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text.", "A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset.", "Table 3 shows 
token counts, a rough classification of each text's dialectal region, and the results for the baseline methods.", "Table 4 presents the full results for our encoder-decoder models.", "ID Base model Multi-task learning model Table 4 : Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec.", "3) and the multi-task model.", "G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model.", "Best results (also taking into account the baseline results from Table 3 ) shown in bold." ] }
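The decoding section of the paper content above describes beam-search decoding with beam size 5 and a lexical filter that removes candidate characters leading to strings not covered by the lexicon. The function below is a minimal sketch of that idea. The `next_log_probs` scoring callback, the end-of-word handling, and the length cap are assumptions made for illustration, not the authors' implementation.

```python
def beam_search_with_filter(next_log_probs, lexicon, beam_size=5, eos="</s>", max_len=30):
    """next_log_probs(prefix) -> {char: log_prob} is an assumed scoring callback
    wrapping a trained decoder; `lexicon` is a set of allowed target words
    (CELEX word forms plus training targets in the paper)."""
    prefixes = {word[:i] for word in lexicon for i in range(len(word) + 1)}
    beams = [("", 0.0)]                                   # (decoded string, log probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for hyp, score in beams:
            for ch, lp in next_log_probs(hyp).items():
                if ch == eos:
                    if hyp in lexicon:                    # end-of-word handling: an assumption
                        finished.append((hyp, score + lp))
                    continue
                if (hyp + ch) not in prefixes:            # lexical filter: prune dead-end prefixes
                    continue
                candidates.append((hyp + ch, score + lp))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    finished.extend(beams)                                # fall back to unfinished hypotheses
    return max(finished, key=lambda x: x[1])[0] if finished else ""
```

In practice the prefix set would be built once from the lexicon the paper uses (CELEX plus the target word forms from the training set) and reused across all words.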
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "4", "4.1", "4.2", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Datasets", "Base model", "Training", "Decoding", "Attention", "Multi-task learning", "Hyperparameters", "Implementation", "Evaluation", "Word accuracy", "Learned vector representations", "Model parameters", "Final output", "Saliency analysis", "Related Work", "Conclusion and Future Work" ] }
GEM-SciDuet-train-56#paper-1105#slide-14
Differences of learned parameters
Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
Marcel Bollmann, Joachim Bingel, Anders Søgaard Learning attention for hist. normalization by learning to pronounce
[]
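This slide ("Differences of learned parameters") corresponds to the analysis in which the weight changes that attention and multi-task learning induce in the final output layer, relative to the base model, are correlated with Pearson's r. Below is a small sketch of that comparison; it assumes the three weight matrices have already been extracted from trained models, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def weight_change_correlation(w_base, w_attention, w_mtl):
    """Correlate the change attention induces in a weight matrix with the change
    multi-task learning induces, both measured against the shared base model."""
    d_att = (np.asarray(w_attention) - np.asarray(w_base)).ravel()
    d_mtl = (np.asarray(w_mtl) - np.asarray(w_base)).ravel()
    return pearsonr(d_att, d_mtl)     # (Pearson's r, p-value)
```

On the final projection layer of the models discussed above, the paper reports an average r of about 0.35, with values as high as 0.96 on individual texts.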
GEM-SciDuet-train-57#paper-1106#slide-0
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
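As a companion to the linearization-order comparison in the content above (human, global-random, and per-example random traversal of a node's children), the following Python sketch illustrates the three ordering strategies on a toy graph. The tuple-based AMR encoding, the edge inventory, and the function names are illustrative assumptions rather than the authors' implementation, and scope markers are omitted here.

import random

# Toy AMR node: (concept, [(edge_label, child_node), ...]) -- an illustrative encoding only.
EXAMPLE = ("meet", [(":ARG0", ("person", [(":ARG1-of", ("expert", []))])),
                    (":time", ("date-entity", [])),
                    (":location", ("city", []))])

def human_order(children):
    return list(children)                       # keep the human-authored annotation order

_EDGE_TYPES = [":ARG0", ":ARG1-of", ":time", ":location"]
random.shuffle(_EDGE_TYPES)                     # shuffled once, then reused for every example
_GLOBAL_RANK = {edge: i for i, edge in enumerate(_EDGE_TYPES)}

def global_random_order(children):
    return sorted(children, key=lambda c: _GLOBAL_RANK.get(c[0], len(_GLOBAL_RANK)))

def per_example_random_order(children):
    shuffled = list(children)
    random.shuffle(shuffled)                    # a fresh order for each example
    return shuffled

def linearize(node, order_children):
    concept, children = node
    tokens = [concept]
    for edge, child in order_children(children):
        tokens += [edge] + linearize(child, order_children)
    return tokens

print(" ".join(linearize(EXAMPLE, human_order)))
print(" ".join(linearize(EXAMPLE, per_example_random_order)))

All three strategies produce a valid token sequence for the same graph; only the relative order of sibling subtrees changes, which is exactly the variation the BLEU comparison above measures.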
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-0
AMR graph
Parse to AMR Generate from AMR
Parse to AMR Generate from AMR
[]
GEM-SciDuet-train-57#paper-1106#slide-1
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
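The paired training procedure of Algorithm 1 described above can be summarized in a short Python sketch. The helpers train_parser, train_generator, parse, and sample are hypothetical placeholders for the seq2seq training and decoding machinery and are passed in by the caller; only the control flow (self-training with a sample that grows by an order of magnitude, then generator pre-training and fine-tuning) is mirrored here.

def paired_training(D, S_e, train_parser, train_generator, parse, sample, N=3, k=200_000):
    """Control-flow sketch of Algorithm 1.
    D: gold (sentence, amr) pairs; S_e: unlabeled external sentences.
    train_parser / train_generator / parse / sample are caller-supplied, hypothetical helpers."""
    theta_P = train_parser(D)                                # initial parser on gold data
    size = k
    for _ in range(N):                                       # self-training iterations
        S_i = sample(S_e, size)                              # draw unlabeled sentences
        A_i = [parse(s, theta_P) for s in S_i]               # silver-label them with the current parser
        theta_P = train_parser(list(zip(S_i, A_i)))          # pre-train on silver pairs
        theta_P = train_parser(D, init=theta_P)              # fine-tune on the gold AMR corpus
        size *= 10                                           # grow the sample by an order of magnitude
    S_N = sample(S_e, size)                                  # final, largest sample
    A_N = [parse(s, theta_P) for s in S_N]
    theta_G = train_generator(list(zip(A_N, S_N)))           # pre-train generator on silver pairs
    theta_G = train_generator([(a, s) for s, a in D], init=theta_G)  # fine-tune generator on gold data
    return theta_P, theta_G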
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
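The rendering function described in the Linearization subsection above (pre-order emission of node and edge types, with parentheses marking scope only around nodes that have more than one child) admits a compact sketch. The tuple-based graph encoding and the example AMR below are toy assumptions, not the authors' data structures.

# Toy AMR node: (concept, [(edge_label, child_node), ...]).
def render(node):
    """Emit tokens in pre-order; scope markers '(' ')' only around multi-child nodes."""
    concept, children = node
    tokens = [concept]
    for edge, child in children:
        tokens.append(edge)
        child_tokens = render(child)
        if len(child[1]) > 1:                  # child has more than one outgoing edge: mark scope
            tokens += ["("] + child_tokens + [")"]
        else:                                  # single child or leaf: omit parentheses
            tokens += child_tokens
    return tokens

amr = ("hold", [(":ARG0", ("person", [(":ARG0-of", ("have-org-role", []))])),
                (":ARG1", ("meet", [(":ARG0", ("person", [])), (":time", ("date-entity", []))]))])
print(" ".join(render(amr)))
# hold :ARG0 person :ARG0-of have-org-role :ARG1 ( meet :ARG0 person :time date-entity )

Omitting the parentheses around single-child nodes keeps the sequence short, which is consistent with the ablation above showing that scope markers help while excessive sequence length hurts.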
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-1
Abstract Meaning Representation
Rooted Directed Acyclic Graph Nodes: concepts (nouns, verbs, named entities, etc) Edges: Semantic Role Labels I have known a planet that was inhabited by a lazy man. inhabit ARG0 ARG1 I knew a planet that was inhabited by a lazy man. I planet ARG1-of Generate from AMR inhabit man I know a planet. It is inhabited by a lazy man. mod Parse to AMR inhabit
Rooted Directed Acyclic Graph Nodes: concepts (nouns, verbs, named entities, etc) Edges: Semantic Role Labels I have known a planet that was inhabited by a lazy man. inhabit ARG0 ARG1 I knew a planet that was inhabited by a lazy man. I planet ARG1-of Generate from AMR inhabit man I know a planet. It is inhabited by a lazy man. mod Parse to AMR inhabit
[]
GEM-SciDuet-train-57#paper-1106#slide-2
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
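The encoder modifications described in the sequence-to-sequence model subsection above (concatenating forward and backward states at every level of the stack, and dropout in the first layer) could look roughly as follows. PyTorch is an assumption, since no framework is named in this text; the global-attention decoder and unknown-word copying are omitted; the 500-dimensional embeddings and hidden states, the 2-layer stack, and the 0.5 dropout rate follow the experimental setup.

import torch
import torch.nn as nn

class StackedBiLSTMEncoder(nn.Module):
    """Sketch of the described encoder: forward/backward states are concatenated at
    every level of the stack (not only at the top); dropout is applied in the first layer."""
    def __init__(self, vocab_size, emb_dim=500, hidden_dim=500, num_layers=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.first_layer_dropout = nn.Dropout(dropout)  # assumption: dropout on the first layer's input
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(num_layers):
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=True))
            in_dim = 2 * hidden_dim                     # next layer consumes concatenated fwd/bwd states

    def forward(self, token_ids):
        x = self.first_layer_dropout(self.embed(token_ids))
        for lstm in self.layers:
            x, _ = lstm(x)                              # (batch, seq_len, 2 * hidden_dim)
        return x                                        # hidden states for the attention decoder to weigh

encoder = StackedBiLSTMEncoder(vocab_size=1000)
states = encoder(torch.randint(0, 1000, (2, 7)))
print(states.shape)                                     # torch.Size([2, 7, 1000])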
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-2
Applications
Text Summarization (Liu et al., 2015) Parse sentences: summary sentence AMR graphs: AMR graph: Source The children told that lie Target sono uso-wa kodomo-tachi-ga tsui-ta that lie-TOP child-and others-NOM breathe out-PAST Machine Translation (Jones et al., 2012) child lie ARG0-of that Parse AMR graph: ARG0 ARG1 ARG1 ARG0 child lie Graph-to-graph transformation: tachi kodomo ARG0-of ARG0-of that sono Parse AMR graph: Generate translation:
Text Summarization (Liu et al., 2015) Parse sentences: summary sentence AMR graphs: AMR graph: Source The children told that lie Target sono uso-wa kodomo-tachi-ga tsui-ta that lie-TOP child-and others-NOM breathe out-PAST Machine Translation (Jones et al., 2012) child lie ARG0-of that Parse AMR graph: ARG0 ARG1 ARG1 ARG0 child lie Graph-to-graph transformation: tachi kodomo ARG0-of ARG0-of that sono Parse AMR graph: Generate translation:
[]
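The depth-first linearization and scope-marking rendering function described in the paper content above can be illustrated with a short sketch. This is a hypothetical helper rather than the released code: for clarity it brackets every non-leaf child, whereas the paper additionally drops the scope markers when a node has only a single child in order to shorten sequences.

```python
from typing import Any, Dict, List

# A node is represented as {"concept": str, "children": [(edge_label, child_node), ...]}
AMRNode = Dict[str, Any]

def linearize(node: AMRNode) -> List[str]:
    """Pre-order traversal: emit the concept, then each edge label followed by
    the rendering of its child; non-leaf children are wrapped in scope markers."""
    tokens = [node["concept"]]
    for edge_label, child in node.get("children", []):
        tokens.append(edge_label)
        child_tokens = linearize(child)
        if child.get("children"):               # non-leaf child: mark its scope
            tokens += ["("] + child_tokens + [")"]
        else:                                   # leaf: no scope markers needed
            tokens += child_tokens
    return tokens

example = {
    "concept": "meet",
    "children": [
        (":ARG0", {"concept": "person",
                   "children": [(":ARG1-of", {"concept": "expert", "children": []})]}),
        (":location", {"concept": "city_0", "children": []}),
    ],
}
print(" ".join(linearize(example)))
# meet :ARG0 ( person :ARG1-of expert ) :location city_0
```

Because the output is produced by a deterministic traversal of the graph, any ordering of the children lists yields a valid, recoverable linearization, which is what the linearization-order experiments in the paper exploit.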
GEM-SciDuet-train-57#paper-1106#slide-3
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis, including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.

{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-3
Existing Approaches
Barzdins and Gosko 2016, Peng et al. 2017, Noord and Bos 2017, Buys and Blunsom
Barzdins and Gosko 2016, Peng et al. 2017, Noord and Bos 2017, Buys and Blunsom
[]
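The named-entity anonymization step described in the preprocessing section above replaces entity mentions with indexed type placeholders and keeps a map so the generator's output can be de-anonymized later. The snippet below is an illustrative sketch of the sentence side only (the AMR sub-graph replacement and the aligner-based mapping are omitted); anonymize_sentence and deanonymize are hypothetical names, and the entity spans are assumed to come from an external tagger such as Stanford NER. Dates would receive their own separate placeholders per the paper, which this sketch does not handle.

```python
from collections import defaultdict

def anonymize_sentence(tokens, ner_spans):
    """Replace recognised entity spans with indexed placeholders (e.g. 'country_0').

    tokens    -- list of word tokens
    ner_spans -- list of (start, end, entity_type) tuples, assumed non-overlapping
    Returns the anonymized token list and a placeholder -> surface-form map.
    """
    counters = defaultdict(int)
    mapping = {}
    out, i = [], 0
    for start, end, ent_type in sorted(ner_spans):
        out.extend(tokens[i:start])
        placeholder = f"{ent_type}_{counters[ent_type]}"
        counters[ent_type] += 1
        mapping[placeholder] = " ".join(tokens[start:end])
        out.append(placeholder)
        i = end
    out.extend(tokens[i:])
    return out, mapping

def deanonymize(tokens, mapping):
    """Restore surface forms in generator output; unknown tokens pass through."""
    return [mapping.get(t, t) for t in tokens]

toks = "US officials held an expert group meeting in January 2002 in New York .".split()
anon, m = anonymize_sentence(toks, [(0, 1, "country"), (11, 13, "city")])
print(" ".join(anon))
# country_0 officials held an expert group meeting in January 2002 in city_0 .
print(" ".join(deanonymize(anon, m)))
# US officials held an expert group meeting in January 2002 in New York .
```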
GEM-SciDuet-train-57#paper-1106#slide-4
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis, including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-4
Sequence to sequence model
A know knew planet a planet man inhabit inhabited was input Encoder Decoder output know ARG0 I ARG1 planet ARG1-of inhabit <s> I know the planet of
A know knew planet a planet man inhabit inhabited was input Encoder Decoder output know ARG0 I ARG1 planet ARG1-of inhabit <s> I know the planet of
[]
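The paired training procedure ("Algorithm 1") embedded in the paper_content field above is hard to follow in its flattened, one-sentence-per-element form, so the following minimal Python sketch restates its control flow as a reading aid. The callables train_parser, train_generator, parse, and sample are hypothetical placeholders for the seq2seq training, decoding, and corpus-sampling routines described in the text (assumed here to accept an optional init argument for warm-starting); they are not part of any released codebase, and only the loop structure is taken from the source.

```python
# Sketch of the paired training procedure ("Algorithm 1") quoted in the
# paper_content above. All injected callables are hypothetical placeholders;
# only the control flow (self-train the parser, then pre-train and fine-tune
# the generator on parser-labeled data) follows the text.

def paired_training(D, S_e, train_parser, train_generator, parse, sample,
                    N=3, k=200_000):
    """D: gold (sentence, AMR) pairs; S_e: a large unlabeled sentence corpus."""
    theta_P = train_parser(D)                        # 1: parser on gold data
    S_i = sample(S_e, k)                             # 2: initial unlabeled sample
    for i in range(1, N + 1):                        # 3-8: self-training iterations
        A_i = parse(S_i, theta_P)                    # 4: silver AMRs for the sample
        theta_P = train_parser(list(zip(S_i, A_i)))  # 5: pre-train on silver pairs
        theta_P = train_parser(D, init=theta_P)      # 6: fine-tune on gold pairs
        S_i = sample(S_e, k * 10 ** i)               # 7: grow the sample tenfold
    S_N = sample(S_e, k * 10 ** N)                   # 9: final large sample
    A_N = parse(S_N, theta_P)                        # 10: label it with the best parser
    theta_G = train_generator(list(zip(A_N, S_N)))   # 11: pre-train the generator
    theta_G = train_generator(D, init=theta_G)       # 12: fine-tune on gold data
    return theta_P, theta_G
```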
GEM-SciDuet-train-57#paper-1106#slide-5
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
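The three linearization orders compared above differ only in how the children of each AMR node are ranked before the depth-first traversal: one fixed random ranking of edge types shared across the whole dataset (GLOBAL-RANDOM) versus a fresh ranking drawn per example (RANDOM). A minimal Python sketch under the assumption that children are stored as (edge_type, child) pairs; the helper names and graph encoding are illustrative, not the authors' code.

```python
import random

def make_global_random_key(edge_types, seed=0):
    """One random ranking over all edge types, reused for every example (GLOBAL-RANDOM)."""
    rng = random.Random(seed)
    order = list(edge_types)
    rng.shuffle(order)
    rank = {e: i for i, e in enumerate(order)}
    return lambda edge_type: rank[edge_type]

def make_per_example_key(edge_types, rng):
    """A fresh random ranking of edge types drawn for each example (RANDOM)."""
    order = list(edge_types)
    rng.shuffle(order)
    rank = {e: i for i, e in enumerate(order)}
    return lambda edge_type: rank[edge_type]

def order_children(children, key):
    """children: list of (edge_type, child_node) pairs under one parent node."""
    return sorted(children, key=lambda pair: key(pair[0]))

# Example: rank two sibling edges under a shared global ordering.
edge_types = [":ARG0", ":ARG1", ":time", ":location"]
key = make_global_random_key(edge_types, seed=13)
print(order_children([(":time", "date-entity"), (":location", "city")], key))
```

Either key can be plugged into the same traversal; only the ranking of sibling edges changes, which is why the two random settings score so close to one another in Table 6.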
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-5
Linearization
Graph -> Depth First Search (Human-authored annotation) ARG0 ARG1 time location person meet date-entity city ARG0-of ARG0 year month name have-role person ARG1 ARG2 New York ARG1-of ARG2-of country official name expert group United States US officials held an expert group meeting in January 2002 in New York .
Graph -> Depth First Search (Human-authored annotation) ARG0 ARG1 time location person meet date-entity city ARG0-of ARG0 year month name have-role person ARG1 ARG2 New York ARG1-of ARG2-of country official name expert group United States US officials held an expert group meeting in January 2002 in New York .
[]
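The slide above shows the graph being turned into a sequence by a depth-first traversal in the order given by the human-authored annotation. A minimal sketch of that pre-order traversal on a simplified fragment of the "US officials held an expert group meeting ..." example; the graph layout below is an illustrative assumption and omits the backward-traversal steps and scope markers used in the full system.

```python
def linearize(node, graph):
    """Pre-order depth-first traversal emitting node and edge labels.
    graph maps a node label to an ordered list of (edge_label, child) pairs."""
    tokens = [node]
    for edge, child in graph.get(node, []):
        tokens.append(edge)
        tokens.extend(linearize(child, graph))
    return tokens

# Simplified fragment of the meeting example (not the exact annotation).
graph = {
    "meet": [(":ARG0", "person"), (":time", "date-entity"), (":location", "city_New_York")],
    "person": [(":ARG1-of", "expert"), (":ARG2-of", "group")],
    "date-entity": [(":year", "2002"), (":month", "1")],
}
print(" ".join(linearize("meet", graph)))
# meet :ARG0 person :ARG1-of expert :ARG2-of group :time date-entity :year 2002 :month 1 :location city_New_York
```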
GEM-SciDuet-train-57#paper-1106#slide-6
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
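A compact sketch of the paired training procedure of Algorithm 1 above. The train_parser, train_generator, and parse helpers are opaque placeholders for the underlying seq2seq training and decoding code, not real APIs; only the control flow mirrors the algorithm.

```python
import random

def paired_training(D, S_e, N, k, train_parser, train_generator, parse):
    """D: list of (sentence, amr) pairs; S_e: large pool of unlabeled sentences."""
    theta_P = train_parser(D)                                  # initial parser on gold data
    for i in range(1, N + 1):                                  # self-training iterations
        S_i = random.sample(S_e, k * 10 ** (i - 1))            # sample grows 10x per round
        A_i = [parse(s, theta_P) for s in S_i]                 # silver AMRs for the sample
        theta_P = train_parser(list(zip(S_i, A_i)))            # pre-train on silver pairs
        theta_P = train_parser(D, init=theta_P)                # fine-tune on gold pairs
    S_N = random.sample(S_e, k * 10 ** N)                      # final, largest sample
    A_N = [parse(s, theta_P) for s in S_N]
    theta_G = train_generator(list(zip(A_N, S_N)))             # pre-train generator on silver
    theta_G = train_generator([(a, s) for s, a in D], init=theta_G)  # fine-tune on gold
    return theta_P, theta_G
```

The sample sizes follow the k, 10k, 100k, ... schedule of lines 2 and 7 of the algorithm; everything else is the standard pre-train-then-fine-tune pattern applied first to the parser and then to the generator.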
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
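The scope-marked rendering function of Section 4.2 can be sketched as below. The prose leaves the single-child shortcut slightly ambiguous, so this sketch simply drops the parentheses around leaf concepts, which reproduces the first linearized fragment shown in Figure 3 above; the graph encoding is an assumption for illustration.

```python
def render(node, graph):
    """Emit a node, then each outgoing edge followed by its (possibly bracketed) child.
    graph maps a node label to an ordered list of (edge_label, child) pairs."""
    parts = [node]
    for edge, child in graph.get(node, []):
        parts.append(edge)
        if graph.get(child):                     # child has structure: keep scope markers
            parts.append("( " + render(child, graph) + " )")
        else:                                    # leaf concept: omit scope markers
            parts.append(child)
    return " ".join(parts)

graph = {
    "limit": [(":arg0", "treaty")],
    "treaty": [(":arg0-of", "control")],
    "control": [(":arg1", "arms")],
}
print(render("limit", graph))
# limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) )
```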
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-6
Preprocessing
ARG0 ARG1 time location person meet date-entity city ARG0-of ARG0 year month name have-role person ARG1 ARG2 New York ARG1-of ARG2-of country official name expert group United States US officials held an expert group meeting in January 2002 in New York . loc_0 officials held an expert group meeting in month_0 year_0 in loc_1
ARG0 ARG1 time location person meet date-entity city ARG0-of ARG0 year month name have-role person ARG1 ARG2 New York ARG1-of ARG2-of country official name expert group United States US officials held an expert group meeting in January 2002 in New York . loc_0 officials held an expert group meeting in month_0 year_0 in loc_1
[]
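A minimal sketch of the sentence-side anonymization shown on the slide above: entity spans and date parts are swapped for indexed placeholder tokens, and the mapping is kept so the placeholders can be restored after generation. The hard-coded spans below stand in for the NER and alignment machinery described in Section 4.1; they are not how spans are actually found.

```python
from collections import defaultdict

def anonymize(tokens, spans):
    """tokens: the sentence as a token list; spans: (start, end, coarse_type), end exclusive.
    Returns the anonymized token list and the placeholder -> surface-form mapping."""
    counters, mapping, out = defaultdict(int), {}, []
    starts = {s[0]: s for s in spans}
    i = 0
    while i < len(tokens):
        if i in starts:
            start, end, typ = starts[i]
            placeholder = f"{typ}_{counters[typ]}"
            counters[typ] += 1
            mapping[placeholder] = " ".join(tokens[start:end])
            out.append(placeholder)
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out, mapping

sent = "US officials held an expert group meeting in January 2002 in New York .".split()
spans = [(0, 1, "loc"), (8, 9, "month"), (9, 10, "year"), (11, 13, "loc")]
anon, mapping = anonymize(sent, spans)
print(" ".join(anon))   # loc_0 officials held an expert group meeting in month_0 year_0 in loc_1 .
print(mapping)          # {'loc_0': 'US', 'month_0': 'January', 'year_0': '2002', 'loc_1': 'New York'}
```

De-anonymization after generation is then a lookup of each predicted placeholder in the stored mapping, falling back to the most frequent training-time mapping or to the name copied from the AMR graph, as described in Section 4.1.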
GEM-SciDuet-train-57#paper-1106#slide-7
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
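The unknown-word replacement described earlier in this section (copy the source token that received the most attention whenever the decoder emits the unknown symbol) reduces at prediction time to an argmax over attention weights. A minimal post-processing sketch, assuming the per-step attention matrix is already available; it is illustrative rather than the exact implementation.

```python
def replace_unknowns(output_tokens, source_tokens, attention, unk="<unk>"):
    """attention[t] holds the decoder's attention weights over source_tokens at step t."""
    result = []
    for t, tok in enumerate(output_tokens):
        if tok == unk:
            weights = attention[t]
            best = max(range(len(source_tokens)), key=lambda j: weights[j])
            result.append(source_tokens[best])   # copy the most-attended source token
        else:
            result.append(tok)
    return result

# Toy example: the second output token is unknown and attends mostly to "officials".
src = ["loc_0", "officials", "held", "a", "meeting"]
out = ["loc_0", "<unk>", "held", "a", "meeting"]
attn = [[0.90, 0.03, 0.03, 0.02, 0.02],
        [0.05, 0.85, 0.04, 0.03, 0.03],
        [0.02, 0.03, 0.90, 0.03, 0.02],
        [0.02, 0.03, 0.03, 0.90, 0.02],
        [0.02, 0.02, 0.03, 0.03, 0.90]]
print(replace_unknowns(out, src, attn))   # ['loc_0', 'officials', 'held', 'a', 'meeting']
```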
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
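The graph-simplification and sense-removal preprocessing described in the paper content above (dropping variable names and the instance-of slash, and stripping PropBank senses for generation) can be illustrated with a rough string-level sketch. The function name, regexes, and example below are assumptions for illustration only, not the authors' code, and they do not handle re-entrant variables.

```python
import re

def simplify_amr(amr_str: str, strip_senses: bool = True) -> str:
    """Sketch of graph simplification: "(e / elect-01" -> "(elect-01", and,
    optionally, "elect-01" -> "elect". Illustrative approximation only."""
    out = re.sub(r"\(\s*\S+\s*/\s*", "(", amr_str)       # drop variable name and "/"
    if strip_senses:
        out = re.sub(r"(?<=[a-z])-\d{2}\b", "", out)     # drop PropBank sense suffixes
    return out

print(simplify_amr('(e / elect-01 :ARG0 (p / person :name (n / name :op1 "Obama")))'))
# -> (elect :ARG0 (person :name (name :op1 "Obama")))
```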
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-7
Experimental Setup
Hand-annotated MR graphs: newswire, forums; ~16k training / 1k development / 1k test pairs; BLEU n-gram precision (Generation)
Hand-annotated MR graphs: newswire, forums; ~16k training / 1k development / 1k test pairs; BLEU n-gram precision (Generation)
[]
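The slide content above evaluates generation with BLEU n-gram precision. The paper itself reports scores from the Moses multi-BLEU script, so the sacrebleu call below is only a stand-in sketch of how such a corpus-level score could be computed, with made-up example strings.

```python
# pip install sacrebleu  (used only as a convenient stand-in for multi-bleu.perl)
import sacrebleu

hypotheses = ["the arms control treaty limits the number of conventional weapons ."]
references = ["the arms control treaty limits the number of conventional weapons ."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])  # one reference stream
print(f"BLEU = {bleu.score:.1f}")
```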
GEM-SciDuet-train-57#paper-1106#slide-8
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
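The paired training / self-training procedure (Algorithm 1) described in the paper content above can be summarized as a control-flow sketch. Here `train`, `fine_tune`, and `parse` are placeholder callables supplied by the caller, and the sample sizes mirror the reported initial k = 200,000 with an order-of-magnitude increase per iteration; this sketches the loop, not the authors' implementation.

```python
import random

def paired_training(D, S_e, train, fine_tune, parse, n_iters=3, k=200_000):
    """Sketch: self-train the parser on unlabeled sentences, then use it to
    create silver AMR graphs for pre-training the generator."""
    theta_P = train(D, task="parse")                              # initial parser on gold pairs
    for i in range(n_iters):
        sample = random.sample(S_e, min(k * 10 ** i, len(S_e)))   # grow the sample each round
        silver = [(s, parse(s, theta_P)) for s in sample]         # label sentences with the parser
        theta_P = train(silver, task="parse")                     # pre-train on silver pairs
        theta_P = fine_tune(theta_P, D, task="parse")             # fine-tune on gold pairs
    sample = random.sample(S_e, min(k * 10 ** n_iters, len(S_e)))
    silver = [(s, parse(s, theta_P)) for s in sample]
    theta_G = train(silver, task="generate")                      # pre-train generator on silver
    theta_G = fine_tune(theta_G, D, task="generate")              # fine-tune generator on gold
    return theta_P, theta_G
```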
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-8
Experiments
Limited Language Model Capacity
Limited Language Model Capacity
[]
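The named-entity clustering step described in the paper content above (mapping AMR's fine-grained entity types onto the four coarse Stanford NER categories) amounts to a lookup table. The dictionary below is a tiny, assumed subset for illustration, not the paper's full manual mapping.

```python
# Tiny illustrative subset of an assumed fine-grained -> coarse entity-type mapping.
FINE_TO_COARSE = {
    "person": "person",
    "country": "location",
    "city": "location",
    "organization": "organization",
    "company": "organization",
    "monetary-quantity": "misc",
}

def coarse_entity_type(fine_type: str) -> str:
    """Fall back to 'misc' for any fine-grained type not covered above."""
    return FINE_TO_COARSE.get(fine_type, "misc")

print(coarse_entity_type("country"))  # -> location
```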
GEM-SciDuet-train-57#paper-1106#slide-9
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
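The abstract above, and the model section repeated in the paper content, describe a stacked-LSTM encoder–decoder with global attention. The PyTorch module below is a minimal, hedged sketch of one attention-and-predict decoding step using dot-product scoring; layer sizes, the vocabulary size, and the omission of input feeding and the copy mechanism are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionStep(nn.Module):
    """One decoding step with dot-product global attention over encoder states."""

    def __init__(self, hidden_size: int = 500, vocab_size: int = 20000):
        super().__init__()
        self.combine = nn.Linear(2 * hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, dec_state: torch.Tensor, enc_states: torch.Tensor):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        scores = torch.bmm(enc_states, dec_state.unsqueeze(2)).squeeze(2)   # (batch, src_len)
        attn = F.softmax(scores, dim=-1)                                    # attention over source tokens
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)       # weighted encoder states
        attn_hidden = torch.tanh(self.combine(torch.cat([context, dec_state], dim=-1)))
        logits = self.out(attn_hidden)                                      # next-token scores
        return logits, attn   # attn can also drive unknown-word copying from the source

# toy usage with random tensors
step = GlobalAttentionStep()
logits, attn = step(torch.randn(2, 500), torch.randn(2, 7, 500))
print(logits.shape, attn.shape)  # torch.Size([2, 20000]) torch.Size([2, 7])
```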
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
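The paper_content block above describes a depth-first linearization with scope markers (its Section 4.2). The sketch below illustrates that idea only; the Node class is a toy stand-in for an AMR graph, and the parenthesization rule (bracket a child's subtree only when that child has children of its own) is one plausible reading of the description, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of depth-first AMR linearization
# with scope markers. Node is a simplified stand-in for a real AMR graph.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Node:
    concept: str
    children: List[Tuple[str, "Node"]] = field(default_factory=list)  # (edge label, child)


def linearize(node: Node) -> List[str]:
    """Pre-order traversal: emit the concept, then each edge label followed by the
    child's tokens, bracketing a child's subtree only when that child has children."""
    tokens = [node.concept]
    for edge, child in node.children:
        tokens.append(edge)
        child_tokens = linearize(child)
        if child.children:              # multi-token subtree: mark its scope
            tokens += ["("] + child_tokens + [")"]
        else:                           # leaf concept: no scope markers needed
            tokens += child_tokens
    return tokens


# Example loosely based on the "expert group meeting" graph discussed in the paper.
meeting = Node("meet", [
    (":ARG0", Node("person", [(":ARG1-of", Node("expert")),
                              (":ARG2-of", Node("group"))])),
    (":location", Node("city_0")),
])
print(" ".join(linearize(meeting)))
# meet :ARG0 ( person :ARG1-of expert :ARG2-of group ) :location city_0
```

The printed string is the kind of compact, scope-marked sequence the preprocessing aims to feed to the seq2seq models; dropping parentheses around single-token subtrees is what keeps the linearization short.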
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-9
First Attempt Generation
TreeToStr: Flanigan et al., NAACL 2016; TSP: Song et al., EMNLP 2016; PBMT: Pourdamghani and Knight, INLG 2016. All three systems rely on a large trained Language model; we will emulate this via data augmentation (Sennrich et al., ACL 2016).
TreeToStr: Flanigan et al., NAACL 2016; TSP: Song et al., EMNLP 2016; PBMT: Pourdamghani and Knight, INLG 2016. All three systems rely on a large trained Language model; we will emulate this via data augmentation (Sennrich et al., ACL 2016).
[]
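The slide above points to data augmentation as a substitute for the large trained Language models used by earlier systems; in the paper this is the paired training procedure (Algorithm 1). The following sketch restates only its control flow: train_parser, train_generator, parse_corpus, and sample are hypothetical callables standing in for routines the paper does not spell out.

```python
# Control-flow sketch of the paired training procedure (Algorithm 1). Only the
# loop structure is taken from the text; all helper routines are injected.
from typing import Callable, List, Sequence, Tuple

Pair = Tuple[str, str]  # (sentence, linearized AMR)


def paired_training(
    gold: List[Pair],
    unlabeled: Sequence[str],
    train_parser: Callable[..., object],
    train_generator: Callable[..., object],
    parse_corpus: Callable[[Sequence[str], object], List[str]],
    sample: Callable[[Sequence[str], int], List[str]],
    rounds: int = 3,
    k: int = 200_000,
):
    theta_p = train_parser(gold)                          # initial parser on gold pairs
    size = k
    for _ in range(rounds):                               # self-training iterations
        sents = sample(unlabeled, size)
        silver = parse_corpus(sents, theta_p)             # parser labels its own data
        theta_p = train_parser(list(zip(sents, silver)))  # pre-train on silver pairs
        theta_p = train_parser(gold, init=theta_p)        # fine-tune on gold pairs
        size *= 10                                        # grow the sample 10x per round
    sents = sample(unlabeled, size)                       # largest silver set for generation
    silver = parse_corpus(sents, theta_p)
    theta_g = train_generator(list(zip(silver, sents)))   # pre-train AMR-to-text generator
    theta_g = train_generator(gold, init=theta_g)         # fine-tune generator on gold pairs
    return theta_p, theta_g
```

The design point the sketch tries to capture is the pairing itself: the self-trained parser is what produces the silver AMR graphs that the generator is pre-trained on, before both models are fine-tuned on the small human-annotated set.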
GEM-SciDuet-train-57#paper-1106#slide-10
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
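Equations (1) and (2) inside the paper_content block above come through the extraction in a flattened form; read against the surrounding definitions, they are the standard conditional prediction objectives for the parser and the generator, restated here in LaTeX:

$$\hat{a} = \arg\max_{a} f(a \mid s;\ \theta_P) \qquad (1)$$
$$\hat{s} = \arg\max_{s} f(s \mid a;\ \theta_G) \qquad (2)$$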
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-10
What went wrong
US officials held an expert group meeting in January / United States officials held held a meeting in / Coverage: a) Sparsity b) Avg sent length: 20 words c) Limited Language
US officials held an expert group meeting in January / United States officials held held a meeting in / Coverage: a) Sparsity b) Avg sent length: 20 words c) Limited Language
[]
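The sparsity problem listed in the slide above is what the anonymization preprocessing in the paper_content targets. The sketch below shows only the general mechanism of swapping entity spans for typed, indexed placeholders and restoring them after generation; it takes entity spans as plain input rather than using the Stanford NER system and the aligners the authors actually rely on.

```python
# Illustrative sketch of named-entity anonymization as a sparsity-reduction step.
# Entity spans are given directly; the mapping is kept so that placeholder tokens
# can be de-anonymized after generation.
from typing import Dict, List, Tuple


def anonymize(sentence: str, entities: List[Tuple[str, str]]) -> Tuple[str, Dict[str, str]]:
    """Replace each (span, type) pair with '<type>_<i>' and remember the mapping."""
    counts: Dict[str, int] = {}
    mapping: Dict[str, str] = {}
    for span, etype in entities:
        idx = counts.get(etype, 0)
        counts[etype] = idx + 1
        token = f"{etype}_{idx}"
        sentence = sentence.replace(span, token, 1)
        mapping[token] = span
    return sentence, mapping


def deanonymize(sentence: str, mapping: Dict[str, str]) -> str:
    """Restore the original surface forms after generation."""
    for token, span in mapping.items():
        sentence = sentence.replace(token, span)
    return sentence


anon, table = anonymize(
    "US officials held an expert group meeting in January 2002 in New York.",
    [("US", "country"), ("January 2002", "date"), ("New York", "city")],
)
print(anon)                      # country_0 officials held an expert group meeting in date_0 in city_0.
print(deanonymize(anon, table))  # original sentence restored
```

Collapsing open-class mentions into a small set of typed placeholders is what shrinks the effective vocabulary, which is the sparsity point the slide is making.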
GEM-SciDuet-train-57#paper-1106#slide-11
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
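The paired training procedure (Algorithm 1 in the paper content above) is essentially a self-training loop whose Gigaword sample grows by an order of magnitude per round, followed by pre-training the generator on the final silver parses and fine-tuning both models on the gold data. A skeleton of that control flow follows; train_parser, train_generator, and parse are hypothetical placeholders, not the authors' actual API.

```python
import random

# Skeleton of the paired training procedure (Algorithm 1 in the text above).
# train_parser, train_generator and parse stand in for the seq2seq training
# and decoding routines, which are not shown here.

def paired_training(D, external_sentences, train_parser, train_generator,
                    parse, n_iters=3, k=200_000):
    parser = train_parser(D)                            # train on gold AMR pairs
    for i in range(n_iters):                            # self-training rounds
        n = min(len(external_sentences), k * 10 ** i)   # sample grows 10x per round
        sample = random.sample(external_sentences, n)
        silver = [(s, parse(parser, s)) for s in sample]
        parser = train_parser(silver)                   # pre-train on silver parses
        parser = train_parser(D, init=parser)           # fine-tune on gold data
    n = min(len(external_sentences), k * 10 ** n_iters)
    silver = [(s, parse(parser, s)) for s in random.sample(external_sentences, n)]
    generator = train_generator(silver)                 # pre-train the generator
    generator = train_generator(D, init=generator)      # fine-tune on gold data
    return parser, generator
```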
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-11
Data Augmentation
Original Dataset: ~16k graph-sentence pairs Gigaword: ~183M sentences *only* Sample sentences with vocabulary overlap Parse to AMR Generate from AMR
Original Dataset: ~16k graph-sentence pairs Gigaword: ~183M sentences *only* Sample sentences with vocabulary overlap Parse to AMR Generate from AMR
[]
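The sampling step on this slide ("sample sentences with vocabulary overlap") can be approximated by keeping only external sentences whose tokens are largely covered by the AMR training vocabulary. The exact criterion and threshold are not given in the text shown here, so both are assumptions in the sketch below.

```python
# Filter external sentences by vocabulary overlap with the AMR training data.
# The 90% coverage threshold is an assumption for illustration; the slide only
# states that sampled sentences should overlap the training vocabulary.

def overlap_filter(external_sentences, train_vocab, min_coverage=0.9):
    kept = []
    for sent in external_sentences:
        tokens = sent.lower().split()
        if not tokens:
            continue
        covered = sum(t in train_vocab for t in tokens) / len(tokens)
        if covered >= min_coverage:
            kept.append(sent)
    return kept

train_vocab = {"officials", "held", "a", "meeting", "in", "january", "the"}
giga = ["Officials held a meeting in January",
        "Quarterly EBITDA guidance was reaffirmed"]
print(overlap_filter(giga, train_vocab))  # keeps only the first sentence
```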
GEM-SciDuet-train-57#paper-1106#slide-12
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-12
Semi-supervised Learning
Søgaard and Rishøj, 2010
Søgaard and Rishøj, 2010
[]
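The record above ties the "Semi-supervised Learning" slide to the self-training rounds described in the paper text (Algorithm 1: parse a Gigaword sample with the current parser, pre-train on the silver parses, fine-tune on the gold AMR corpus, then grow the sample tenfold). The Python sketch below only illustrates that loop under stated assumptions: the `parser` object, its `train`/`fine_tune`/`parse` methods, and the corpus arguments are hypothetical placeholders, not the authors' released code or API.

```python
import random

def self_train_parser(parser, gold_pairs, gigaword_sentences,
                      iterations=3, initial_sample=200_000):
    """Sketch of the self-training rounds (Algorithm 1, lines 1-8).

    `parser` is a placeholder assumed to expose train(), fine_tune(),
    and parse(); the defaults mirror the reported setup
    (k = 200,000 and at most 3 iterations).
    """
    parser.train(gold_pairs)          # initial parser on the gold AMR corpus
    sample_size = initial_sample
    for _ in range(iterations):
        # (1) Parse a sample of unlabeled sentences with the current parser.
        sample = random.sample(gigaword_sentences, sample_size)
        silver_pairs = [(sent, parser.parse(sent)) for sent in sample]
        # (2) Pre-train on the self-labeled (silver) pairs.
        parser.train(silver_pairs)
        # (3) Fine-tune on the original gold sentence-AMR pairs.
        parser.fine_tune(gold_pairs)
        # Grow the unlabeled sample by an order of magnitude each round.
        sample_size *= 10
    return parser
```

Each loop iteration corresponds to the three phases named in the paper: parsing unlabeled text, training new parameters on the silver data, and fine-tuning on the original paired data.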
GEM-SciDuet-train-57#paper-1106#slide-13
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-13
Paired Training
Train AMR Parser P on Original Dataset; S_i = Sample k · 10^i sentences from Gigaword; Parse S_i sentences with P; Re-train AMR Parser P on S_i; Train Generator G on S_N
Train AMR Parser P on Original Dataset; S_i = Sample k · 10^i sentences from Gigaword; Parse S_i sentences with P; Re-train AMR Parser P on S_i; Train Generator G on S_N
[]
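The "Paired Training" record above lists the steps that couple the self-trained parser with the generator (Algorithm 1, lines 9-12): the best parser labels a large Gigaword sample, the generator pre-trains on those silver pairs, and is then fine-tuned on the gold AMR corpus. The sketch below is an illustration only; `parser`, `generator`, and their `parse`/`train`/`fine_tune` methods are assumed placeholder interfaces rather than the paper's implementation.

```python
def pair_train_generator(parser, generator, gold_pairs,
                         gigaword_sentences, final_sample=20_000_000):
    """Sketch of generator pre-training on parser-labeled data
    (Algorithm 1, lines 9-12); the 20M default corresponds to the
    GIGA-20M setting reported in the results tables."""
    sample = gigaword_sentences[:final_sample]
    # Label the large unlabeled sample with the self-trained parser.
    silver_pairs = [(parser.parse(sent), sent) for sent in sample]  # (AMR, text)
    generator.train(silver_pairs)       # pre-train on silver pairs
    generator.fine_tune(gold_pairs)     # fine-tune on the gold AMR corpus
    return generator
```

Pre-training on the silver corpus first and fine-tuning on the gold data afterwards is the ordering the paper contrasts with the back-translation mixing of Sennrich et al. (2016).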
GEM-SciDuet-train-57#paper-1106#slide-14
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-14
Training AMR Parser
Fine-tune: init parameters from previous step and train on Original Dataset Original Dataset sentences from Gigaword Original Dataset Parse S1 with P Train P on S2=2M
Fine-tune: init parameters from previous step and train on Original Dataset Original Dataset sentences from Gigaword Original Dataset Parse S1 with P Train P on S2=2M
[]
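The paper content in the record above also describes the depth-first linearization and the rendering function that inserts scope markers (Section 4.2). The illustrative Python sketch below is not the paper's code: the nested-tuple graph encoding (concept, [(edge, child), ...]) and the example graph are assumptions of the sketch, and it only drops the parentheses around leaf children, whereas the paper additionally omits them around single-child nodes.

# Illustrative depth-first rendering of an AMR-like graph into a token sequence.
# The (concept, [(edge, child), ...]) encoding is an assumption of this sketch.
def linearize(node):
    concept, children = node
    tokens = [concept]
    for edge, child in children:
        tokens.append(edge)
        _, grandchildren = child
        if grandchildren:                       # non-leaf child: mark its scope
            tokens += ["("] + linearize(child) + [")"]
        else:                                   # leaf child: no scope markers
            tokens += linearize(child)
    return tokens

# Hypothetical simplified graph for "Obama was elected and his voters celebrated":
graph = ("and",
         [(":op1", ("elect.01", [(":arg0", ("Obama", []))])),
          (":op2", ("celebrate.01",
                    [(":arg0", ("person", [(":arg0-of", ("vote.01", []))]))]))])
print(" ".join(linearize(graph)))
# and :op1 ( elect.01 :arg0 Obama ) :op2 ( celebrate.01 :arg0 ( person :arg0-of vote.01 ) )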
GEM-SciDuet-train-57#paper-1106#slide-15
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-15
Training AMR Generator
Fine-tune: init parameters from previous step and train on Original Dataset Original Dataset Parse S4 with P
Fine-tune: init parameters from previous step and train on Original Dataset Original Dataset Parse S4 with P
[]
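Both records above also describe anonymizing named entities before training: entity sub-graphs are replaced with a type_index token and the mapping to the original text span is recorded so the surface form can be restored after generation. A minimal sketch of the sentence-side step follows, under the assumption that entity spans and their fine-grained types are already available (e.g. from an NER system plus alignments); the naive string replacement is a simplification, not the paper's alignment-based procedure.

# Minimal sketch of sentence-side anonymization: replace each recognized entity
# span with a "type_i" placeholder and remember the mapping so the surface form
# can be restored after generation. The format of `spans` is an assumption.
def anonymize_sentence(sentence, spans):
    """spans: list of (surface_text, fine_grained_type) pairs found in the sentence."""
    counters = {}
    mapping = {}
    for surface, ent_type in spans:
        index = counters.get(ent_type, 0)
        token = f"{ent_type}_{index}"
        counters[ent_type] = index + 1
        sentence = sentence.replace(surface, token, 1)   # replace first occurrence only
        mapping[token] = surface
    return sentence, mapping

def deanonymize(text, mapping):
    """Restore the recorded surface forms in generated text."""
    for token, surface in mapping.items():
        text = text.replace(token, surface)
    return text

sent, mapping = anonymize_sentence(
    "US officials held an expert group meeting in January 2002 in New York.",
    [("US", "country"), ("New York", "city")])
# sent == "country_0 officials held an expert group meeting in January 2002 in city_0."
print(deanonymize(sent, mapping))  # restores the original entity names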
GEM-SciDuet-train-57#paper-1106#slide-16
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
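The linearization and rendering steps described in the record above (pre-order traversal of the simplified graph, with scope markers "(" and ")" emitted only for nodes that have more than one child) can be made concrete with a short sketch. This is a minimal illustration under assumed data structures; the AMRNode class and the example graph below are not the authors' implementation.

```python
# Minimal sketch of depth-first AMR linearization with scope markers,
# in the spirit of the rendering function described above. The node
# representation is an assumption made only for this illustration.

class AMRNode:
    def __init__(self, concept, children=None):
        self.concept = concept           # e.g. "meet" (senses already stripped)
        self.children = children or []   # list of (edge_label, AMRNode) pairs

def linearize(node):
    """Render a node and its outgoing edges in pre-order.
    Scope markers are emitted only around children of nodes that
    themselves have more than one child, shortening the sequence."""
    tokens = [node.concept]
    for edge_label, child in node.children:
        tokens.append(edge_label)
        child_tokens = linearize(child)
        if len(child.children) > 1:
            tokens += ["("] + child_tokens + [")"]
        else:
            tokens += child_tokens
    return tokens

if __name__ == "__main__":
    # Simplified fragment of "US officials held an expert group meeting ..."
    graph = AMRNode("meet", [
        (":ARG0", AMRNode("person", [
            (":ARG1-of", AMRNode("expert")),
            (":ARG2-of", AMRNode("group")),
        ])),
    ])
    print(" ".join(linearize(graph)))
    # meet :ARG0 ( person :ARG1-of expert :ARG2-of group )
```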
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-16
Final Results: Generation
TreeToStr: Flanigan et al., NAACL 2016; TSP: Song et al., EMNLP 2016; PBMT: Pourdamghani and Knight, INLG 2016
TreeToStr: Flanigan et al., NAACL 2016; TSP: Song et al., EMNLP 2016; PBMT: Pourdamghani and Knight, INLG 2016
[]
GEM-SciDuet-train-57#paper-1106#slide-17
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
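The anonymization step mentioned in this abstract, and detailed in the paper content that follows (replacing entity sub-graphs and their aligned text spans with typed, indexed placeholders such as country_0), can be illustrated on the sentence side. The NER span format below is an assumption made only for this sketch; it is not the original preprocessing pipeline.

```python
# Illustrative sketch of sentence-side named-entity anonymization.
# The (start, end, type) span format is an assumption for illustration.

from collections import defaultdict

def anonymize(tokens, ner_spans):
    """tokens: list of words; ner_spans: list of (start, end, type), end exclusive.
    Returns anonymized tokens plus a mapping used to de-anonymize
    generator output later."""
    counters = defaultdict(int)
    mapping = {}
    spans = {s: (e, t) for s, e, t in ner_spans}
    out, i = [], 0
    while i < len(tokens):
        if i in spans:
            end, etype = spans[i]
            placeholder = f"{etype.lower()}_{counters[etype]}"
            counters[etype] += 1
            mapping[placeholder] = " ".join(tokens[i:end])
            out.append(placeholder)
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out, mapping

if __name__ == "__main__":
    sent = "US officials held an expert group meeting in New York".split()
    spans = [(0, 1, "COUNTRY"), (8, 10, "CITY")]
    anon, mapping = anonymize(sent, spans)
    print(" ".join(anon))  # country_0 officials held an expert group meeting in city_0
    print(mapping)         # {'country_0': 'US', 'city_0': 'New York'}
```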
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
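Editorial note: the depth-first linearization and the child-ordering comparison described in the paper text above (scope markers around multi-child sub-graphs; human versus per-example random traversal orders) can be sketched as follows. This is a minimal illustration, not the authors' code: the dictionary-based AMR representation, the toy graph (taken from the meet / person / expert / group traversal example in the text), and the function names are assumptions made for the sketch.

import random

# Toy AMR as an adjacency map: head concept -> list of (edge label, child concept).
# Representation and example graph are assumptions; they mirror the traversal
# example given in the text (meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group).
toy_amr = {
    "meet": [(":ARG0", "person")],
    "person": [(":ARG1-of", "expert"), (":ARG2-of", "group")],
    "expert": [],
    "group": [],
}

def linearize(graph, node, order_children=lambda edges: edges):
    # Depth-first rendering: emit the node, then each edge followed by its child's
    # rendering; wrap a child's sub-graph in "(" ")" scope markers only when that
    # child has more than one outgoing edge (single-child scopes are omitted,
    # following the rule stated in the paper text).
    tokens = [node]
    for label, child in order_children(list(graph[node])):
        tokens.append(label)
        child_tokens = linearize(graph, child, order_children)
        if len(graph[child]) > 1:
            tokens += ["("] + child_tokens + [")"]
        else:
            tokens += child_tokens
    return tokens

human_order = " ".join(linearize(toy_amr, "meet"))
# Per-example random order: shuffle the children of every node independently.
random_order = " ".join(
    linearize(toy_amr, "meet", order_children=lambda e: random.sample(e, len(e)))
)
print(human_order)   # meet :ARG0 ( person :ARG1-of expert :ARG2-of group )
print(random_order)

A global-random order, the third condition compared in the text, would instead fix one shuffled ranking of edge labels up front and sort every node's children by that ranking rather than reshuffling per example.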
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-17
Final Results Parsing
SBMT CharLSTM+CAMR Seq2Seq NeuralAMR-20M
SBMT CharLSTM+CAMR Seq2Seq NeuralAMR-20M
[]
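Editorial note: for readers who want to work with rows like the ones shown here programmatically, a minimal loading sketch follows. Only the paper_content_text and paper_header_content keys are spelled out inside the records themselves; the file path, the outer field names, and the JSON-lines layout are assumptions made for illustration and may differ from the actual release format.

import json

PATH = "sciduet_train.jsonl"  # hypothetical export path (assumption)

def iter_records(path):
    # Yield (paper sentences, section headers, slide target) per row.
    # Keys other than paper_content_text / paper_header_content are guesses.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            sentences = row["paper_content"]["paper_content_text"]
            headers = row["paper_headers"]["paper_header_content"]
            target = row.get("target", "")
            yield sentences, headers, target

# Example use: report the size of each record.
# for sents, heads, tgt in iter_records(PATH):
#     print(len(sents), "sentences,", len(heads), "sections,", len(tgt.split()), "target tokens")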
GEM-SciDuet-train-57#paper-1106#slide-18
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
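Editorial note: the paired training procedure (Algorithm 1 in the record above) alternates between parsing a growing sample of unlabeled Gigaword sentences, pre-training on the resulting silver data, and fine-tuning on the gold AMR pairs, before using the final parser to pre-train the generator. A skeleton of that loop is sketched below; train_parser, train_generator, and parse are placeholders for the actual seq2seq training and decoding routines and are not part of the original code.

import random

def paired_training(labeled_pairs, unlabeled_sentences,
                    train_parser, train_generator, parse,
                    rounds=3, k=200_000):
    # labeled_pairs: list of (sentence, amr) gold pairs; unlabeled_sentences: list of str.
    # The three callables stand in for the seq2seq components described in the paper.
    parser = train_parser(labeled_pairs)                        # initial parser on the AMR corpus
    sample_size = k
    for _ in range(rounds):                                     # self-training rounds
        n = min(sample_size, len(unlabeled_sentences))
        sample = random.sample(unlabeled_sentences, n)
        silver = [(s, parse(parser, s)) for s in sample]        # parse the unlabeled sample
        parser = train_parser(silver)                           # pre-train on silver data
        parser = train_parser(labeled_pairs, init=parser)       # fine-tune on gold data
        sample_size *= 10                                       # grow the sample by 10x per round
    n = min(sample_size, len(unlabeled_sentences))
    final_sample = random.sample(unlabeled_sentences, n)
    silver = [(s, parse(parser, s)) for s in final_sample]
    generator = train_generator(silver)                         # pre-train generator on silver data
    generator = train_generator(labeled_pairs, init=generator)  # fine-tune generator on gold data
    return parser, generator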
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-18
How did we do Generation
US officials held an expert group meeting in January 2002 in New York . In January 2002 United States officials held a meeting of the group experts in New York . The report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus. The report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .
US officials held an expert group meeting in January 2002 in New York . In January 2002 United States officials held a meeting of the group experts in New York . The report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus. The report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .
[]
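Editorial note: the sentence pair in the slide record just above is the kind of reference/system output behind the BLEU scores quoted in the paper text. A rough way to score such pairs is sketched below with NLTK's corpus_bleu; the authors report multi-BLEU from the Moses suite, so numbers from this sketch will not match theirs exactly, and treating the first sentence as the reference and the second as the system output, like the whitespace tokenization, is an assumption about the slide's layout.

from nltk.translate.bleu_score import corpus_bleu

# One hypothesis with one reference, taken from the example pair in the record above.
references = [
    ["US officials held an expert group meeting in January 2002 in New York .".split()],
]
hypotheses = [
    "In January 2002 United States officials held a meeting of the group experts in New York .".split(),
]

# corpus_bleu expects, per hypothesis, a list of tokenized reference sentences.
score = corpus_bleu(references, hypotheses)
print(f"corpus BLEU: {100 * score:.1f}")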
GEM-SciDuet-train-57#paper-1106#slide-19
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
"To produce fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2, starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 Task 8 (LDC2015E86).", "We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1, line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus (Algorithm 1, line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200,000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator (Algorithm 1, lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning attempts.", "Footnote 5: We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007).", "(Table fragments: Table 2 header Corpus / Examples / OOV@1; Table 3 rows TSP (Song et al., 2016) 21.1 22.4 and TREETOSTR (Flanigan et al., 2016) 23.0 23.0.)", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.",
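The depth-first rendering scheme described above (scope markers only around nodes with more than one child) can be illustrated with a short Python sketch. The tuple encoding of AMR nodes is an assumption made for the example, and the graph is a simplified fragment of the Figure 2 traversal, not the full annotated AMR.

def linearize(node):
    """node: (concept, [(edge_label, child_node), ...]) -- a simplified AMR encoding."""
    concept, children = node
    tokens = [concept]
    for edge, child in children:
        tokens.append(edge)
        child_tokens = linearize(child)
        if len(child[1]) > 1:                      # scope markers only when a node has >1 child
            tokens += ["("] + child_tokens + [")"]
        else:                                      # single child or leaf: omit parentheses
            tokens += child_tokens
    return tokens

# Simplified fragment of the Figure 2 example:
meet = ("meet", [(":ARG0", ("person", [(":ARG1-of", ("expert", [])),
                                       (":ARG2-of", ("group", []))]))])
print(" ".join(linearize(meet)))
# -> meet :ARG0 ( person :ARG1-of expert :ARG2-of group )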
"Through every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model, but still lag behind other parser-dependent systems (CAMR) and resource-heavy approaches (SBMT).", "Generation Results Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-the-art systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without parentheses for indicating scope, Figure 2(c), then without named entity clusters, Figure 2(b), and additionally without any anonymization, Figure 2(a).", "Footnote 6: Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "Footnote 7: We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "Table 4 summarizes our evaluation on the AMR generation.", "Each component is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017).", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016), that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
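To make the three child-ordering strategies compared in the content above (human, global-random, and per-example random) concrete, here is a small Python sketch. It reuses the simplified (concept, [(edge, child), ...]) node encoding assumed in the earlier linearization sketch; the set of edge types and the graphs themselves are assumed inputs, and none of these names come from the authors' code.

import random

def reorder(node, ranking):
    """Sort each node's children by the rank assigned to their edge type."""
    concept, children = node
    ordered = sorted(children, key=lambda ec: ranking.get(ec[0], len(ranking)))
    return (concept, [(edge, reorder(child, ranking)) for edge, child in ordered])

def random_ranking(edge_types, rng):
    types = sorted(edge_types)
    rng.shuffle(types)
    return {edge: i for i, edge in enumerate(types)}

def linearization_variants(graphs, edge_types, seed=0):
    """HUMAN keeps the annotated child order; GLOBAL-RANDOM reuses one random
    ranking of edge types for every graph; RANDOM draws a fresh ranking per graph."""
    rng = random.Random(seed)
    global_rank = random_ranking(edge_types, rng)
    for g in graphs:
        human = g
        global_random = reorder(g, global_rank)
        per_example_random = reorder(g, random_ranking(edge_types, rng))
        yield human, global_random, per_example_random

Under the BLEU comparison reported above (21.7 / 20.8 / 20.3 on the dev set), the three variants behave similarly, which is the basis for the claim that the models are largely agnostic to linearization order.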
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-19
Summary
Sequence-to-sequence models for Parsing and Generation Paired Training: scalable data augmentation algorithm Achieve state-of-the-art performance on generating from AMR Best-performing Neural AMR Parser Demo, Code and Pre-trained Models: http://ikonstas.net
Sequence-to-sequence models for Parsing and Generation Paired Training: scalable data augmentation algorithm Achieve state-of-the-art performance on generating from AMR Best-performing Neural AMR Parser Demo, Code and Pre-trained Models: http://ikonstas.net
[]
GEM-SciDuet-train-57#paper-1106#slide-21
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
GEM-SciDuet-train-57#paper-1106#slide-21
Encoding
Linearize -> RNN encoding hold ARG0 person ARG0-of Recurrent Neural Network (RNN)
Linearize -> RNN encoding hold ARG0 person ARG0-of Recurrent Neural Network (RNN)
[]
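The "Linearize -> RNN encoding" step named on this slide corresponds to embedding the linearized AMR tokens and running a stacked bidirectional LSTM over them. The PyTorch sketch below is a minimal illustration of that idea only; it is not the authors' released code, the vocabulary size is a placeholder, and only the 500-dimensional embeddings and hidden states follow the setup reported in the paper content above.

import torch
import torch.nn as nn

class LinearizedAMREncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=500, hidden_dim=500, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, num_layers=layers,
                           bidirectional=True, batch_first=True, dropout=0.5)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices of linearized AMR tokens,
        # e.g. a sequence like "hold :ARG0 ( person :ARG0-of official ) ..."
        embedded = self.embed(token_ids)
        hidden_states, _ = self.rnn(embedded)
        return hidden_states              # (batch, seq_len, 2 * hidden_dim)

encoder = LinearizedAMREncoder()
states = encoder(torch.randint(0, 10000, (1, 12)))   # toy batch of one 12-token sequence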
GEM-SciDuet-train-57#paper-1106#slide-22
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
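The paired training procedure just described (Algorithm 1) bootstraps the parser by self-training on an unlabeled corpus and then uses the resulting silver AMRs to pre-train the generator. A minimal sketch of that loop follows; `train`, `fine_tune`, and `parse` are hypothetical callbacks standing in for the actual seq2seq training and beam-search decoding routines, and the growing sample sizes follow the k*10^i schedule of the algorithm.

```python
import random

def paired_training(D, S_e, train, fine_tune, parse, N=3, k=200_000):
    """Sketch of the paired training procedure (Algorithm 1).

    D is a list of gold (sentence, AMR) pairs, S_e a list of unlabeled
    sentences; train/fine_tune/parse are hypothetical stand-ins for the
    real seq2seq training and decoding code.
    """
    theta_P = train(D)                                    # line 1: initial parser on gold data
    sample = random.sample(S_e, min(len(S_e), k))         # line 2: first unlabeled sample
    for i in range(1, N + 1):                             # lines 3-8: self-training iterations
        silver = [(s, parse(s, theta_P)) for s in sample]        # parse the sample
        theta_P = train(silver)                                  # pre-train on silver AMRs
        theta_P = fine_tune(D, init=theta_P)                     # fine-tune on gold pairs
        sample = random.sample(S_e, min(len(S_e), k * 10 ** i))  # grow sample by 10x
    # The last (largest) sample plays the role of lines 9-10: its silver
    # parses pre-train the generator, which is then fine-tuned on D.
    silver = [(parse(s, theta_P), s) for s in sample]
    theta_G = train(silver)
    theta_G = fine_tune(D, init=theta_G)
    return theta_P, theta_G
```

The design choice, as the text notes, is that both directions absorb fluent English from millions of weakly labeled examples before the final fine-tuning pass on the small human-annotated corpus.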
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
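Of the techniques described in the paper content above, the rendering function of the Linearization subsection (a pre-order traversal that emits each concept, then each edge label followed by its child subtree, with scope parentheses omitted for simple children) is the most directly algorithmic. The sketch below follows one reading of that rule; the nested-tuple encoding of AMR nodes is an assumption made for the example, not the implementation's actual data structure.

```python
def render(node):
    """Emit tokens for a simplified AMR node by pre-order traversal.

    A node is a (concept, children) pair, where children is a list of
    (edge_label, child_node) pairs; this encoding is hypothetical and
    used only for the sketch.
    """
    concept, children = node
    tokens = [concept]                       # (1) emit the node's concept
    for edge, child in children:
        tokens.append(edge)                  # (2) emit the edge label ...
        subtree = render(child)              # ... then the child subtree
        # One reading of the scope rule described above: parentheses wrap
        # the child only when it has two or more children of its own.
        if len(child[1]) >= 2:
            tokens += ["("] + subtree + [")"]
        else:
            tokens += subtree
    return tokens

# Roughly the sub-graph from the paper's running example.
expert = ("expert", [])
group = ("group", [])
person = ("person", [(":ARG1-of", expert), (":ARG2-of", group)])
meet = ("meet", [(":ARG0", person)])
print(" ".join(render(meet)))   # meet :ARG0 ( person :ARG1-of expert :ARG2-of group )
```

In this toy output only the two-child person node receives scope markers, which keeps the linearized sequence short.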
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-22
Decoding
Tokens shown: US, a, the, meeting, US, person, expert, meeting, meetings, meet. Beam hypotheses by step: step 1: w11: Holding; w12: Helds; w13: Hold; w14: (blank). Step 2: w21: Hold a; w22: Hold the; w23: Held a; w24: Held the. Step k: wk1: The US officials held; wk2: US officials held a; wk3: US officials hold the US; wk4: US officials will hold a.
Tokens shown: US, a, the, meeting, US, person, expert, meeting, meetings, meet. Beam hypotheses by step: step 1: w11: Holding; w12: Helds; w13: Hold; w14: (blank). Step 2: w21: Hold a; w22: Hold the; w23: Held a; w24: Held the. Step k: wk1: The US officials held; wk2: US officials held a; wk3: US officials hold the US; wk4: US officials will hold a.
[]
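The Decoding slide content above shows beam hypotheses growing from single words (Holding, Hold, Held) into full sentences about the US officials' meeting, and the experimental setup in the paper text reports beam-search decoding with beam size 5 for both parsing and generation. A generic beam-search sketch follows; the `step` scoring interface is assumed for illustration and is not taken from the paper's actual code.

```python
import math

def beam_search(step, start_token, end_token, beam_size=5, max_len=50):
    """Generic beam search over a next-token scorer.

    step(prefix) must return (token, probability) candidates for the given
    prefix; this interface is a hypothetical stand-in for the attention-based
    seq2seq decoder described in the paper.
    """
    beams = [([start_token], 0.0)]            # (token sequence, log-probability)
    finished = []
    for _ in range(max_len):
        expansions = []
        for seq, score in beams:
            for token, prob in step(seq):
                expansions.append((seq + [token], score + math.log(prob)))
        if not expansions:
            break
        expansions.sort(key=lambda pair: pair[1], reverse=True)
        beams = []
        for seq, score in expansions:
            if len(beams) == beam_size:
                break
            if seq[-1] == end_token:
                finished.append((seq, score))   # completed hypothesis
            else:
                beams.append((seq, score))      # keep expanding
        if not beams:
            break
    finished.extend(beams)                      # fall back to unfinished beams
    return max(finished, key=lambda pair: pair[1])[0]
```

In the paper's setting, `step` would be the decoder's next-token distribution, and the same routine serves both the text-to-AMR and AMR-to-text directions.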
GEM-SciDuet-train-57#paper-1106#slide-23
1106
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the non-sequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive results of 62.1 SMATCH, the current best score reported without significant use of external semantic resources. For AMR generation, our model establishes a new state-of-the-art performance of BLEU 33.8. We present extensive ablative and qualitative analysis including strong evidence that sequence-based AMR models are robust against ordering variations of graph-to-sequence conversions.
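Among the "careful preprocessing" steps this abstract refers to, the paper content above describes replacing named-entity sub-graphs and their aligned text spans with typed, indexed placeholders (e.g., country_0) and recording the mapping so that surface forms can be restored after generation. A minimal sketch of that idea over pre-identified entity spans is given below; the span format and the toy spans are assumptions for illustration.

```python
from collections import defaultdict

def anonymize(tokens, entities):
    """Replace entity spans with typed, indexed placeholder tokens.

    tokens   : list of words in the sentence
    entities : list of (start, end, fine_type) spans, assumed to come from
               an aligner or NER step; the exact format is illustrative only.
    Returns the anonymized tokens plus the mapping used to restore the
    original surface forms once generation has produced a placeholder.
    """
    counters = defaultdict(int)
    mapping = {}
    spans = {start: (end, etype) for start, end, etype in entities}
    out, i = [], 0
    while i < len(tokens):
        if i in spans:
            end, etype = spans[i]
            placeholder = f"{etype}_{counters[etype]}"
            counters[etype] += 1
            mapping[placeholder] = " ".join(tokens[i:end])
            out.append(placeholder)
            i = end
        else:
            out.append(tokens[i])
            i += 1
    return out, mapping

sent = "US officials held an expert group meeting in New York .".split()
ents = [(0, 1, "country"), (8, 10, "city")]      # hypothetical spans
anon, table = anonymize(sent, ents)
print(" ".join(anon))   # country_0 officials held an expert group meeting in city_0 .
print(table)            # {'country_0': 'US', 'city_0': 'New York'}
```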
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ], "paper_content_text": [ "Introduction Abstract Meaning Representation (AMR) is a semantic formalism to encode the meaning of natural language text.", "As shown in Figure 1 , AMR represents the meaning using a directed graph while abstracting away the surface forms in text.", "AMR has been used as an intermediate meaning representation for several applications including machine translation (MT) (Jones et al., 2012) , summarization (Liu et al., 2015) , sentence compression (Takase et al., 2016) , and event extraction (Huang et al., 2016) .", "While AMR allows for rich semantic representation, annotating training data in AMR is expensive, which in turn limits the use Obama was elected and his voters celebrated Figure 1 : An example sentence and its corresponding Abstract Meaning Representation (AMR).", "AMR encodes semantic dependencies between entities mentioned in the sentence, such as \"Obama\" being the \"arg0\" of the verb \"elected\".", "of neural network models (Misra and Artzi, 2016; Peng et al., 2017; Barzdins and Gosko, 2016) .", "In this work, we present the first successful sequence-to-sequence (seq2seq) models that achieve strong results for both text-to-AMR parsing and AMR-to-text generation.", "Seq2seq models have been broadly successful in many other applications (Wu et al., 2016; Bahdanau et al., 2015; Luong et al., 2015; Vinyals et al., 2015) .", "However, their application to AMR has been limited, in part because effective linearization (encoding graphs as linear sequences) and data sparsity were thought to pose significant challenges.", "We show that these challenges can be easily overcome, by demonstrating that seq2seq models can be trained using any graph-isomorphic linearization and that unlabeled text can be used to significantly reduce sparsity.", "Our approach is two-fold.", "First, we introduce a novel paired training procedure that enhances both the text-to-AMR parser and AMR-to-text generator.", "More concretely, first we use self-training to bootstrap a high quality AMR parser from millions of unlabeled Gigaword sentences (Napoles et al., 2012) and then use the automatically parsed AMR graphs to pre-train an AMR generator.", "This paired training allows both the parser and generator to learn high quality representations of fluent English text from millions of weakly labeled examples, that are then fine-tuned using human annotated AMR data.", "Second, we propose a preprocessing procedure for the AMR graphs, which includes anonymizing entities and dates, grouping entity categories, and encoding nesting information in 
concise ways, as illustrated in Figure 2(d) .", "This preprocessing procedure helps overcoming the data sparsity while also substantially reducing the complexity of the AMR graphs.", "Under such a representation, we show that any depth first traversal of the AMR is an effective linearization, and it is even possible to use a different random order for each example.", "Experiments on the LDC2015E86 AMR corpus (SemEval-2016 Task 8) demonstrate the effectiveness of the overall approach.", "For parsing, we are able to obtain competitive performance of 62.1 SMATCH without using any external annotated examples other than the output of a NER system, an improvement of over 10 points relative to neural models with a comparable setup.", "For generation, we substantially outperform previous best results, establishing a new state of the art of 33.8 BLEU.", "We also provide extensive ablative and qualitative analysis, quantifying the contributions that come from preprocessing and the paired training procedure.", "Related Work Alignment-based Parsing Flanigan et al.", "(2014) (JAMR) pipeline concept and relation identification with a graph-based algorithm.", "extend JAMR by performing the concept and relation identification tasks jointly with an incremental model.", "Both systems rely on features based on a set of alignments produced using bi-lexical cues and hand-written rules.", "In contrast, our models train directly on parallel corpora, and make only minimal use of alignments to anonymize named entities.", "Grammar-based Parsing (CAMR) perform a series of shift-reduce transformations on the output of an externally-trained dependency parser, similar to Damonte et al.", "(2017) , Brandt et al.", "(2016 ), Puzikov et al.", "(2016 ), and Goodman et al.", "(2016 .", "Artzi et al.", "(2015) use a grammar induction approach with Combinatory Categorical Grammar (CCG), which relies on pretrained CCGBank categories, like Bjerva et al.", "(2016) .", "Pust et al.", "(2015) recast parsing as a string-to-tree Machine Translation problem, using unsupervised alignments (Pourdamghani et al., 2014) , and employing several external semantic resources.", "Our neural approach is engineering lean, relying only on a large unannotated corpus of English and algorithms to find and canonicalize named entities.", "Neural Parsing Recently there have been a few seq2seq systems for AMR parsing (Barzdins and Gosko, 2016; Peng et al., 2017) .", "Similar to our approach, Peng et al.", "(2017) deal with sparsity by anonymizing named entities and typing low frequency words, resulting in a very compact vocabulary (2k tokens).", "However, we avoid reducing our vocabulary by introducing a large set of unlabeled sentences from an external corpus, therefore drastically lowering the out-of-vocabulary rate (see Section 6).", "Flanigan et al.", "(2016) specify a number of tree-to-string transduction rules based on alignments and POS-based features that are used to drive a tree-based SMT system.", "Pourdamghani et al.", "(2016) also use an MT decoder; they learn a classifier that linearizes the input AMR graph in an order that follows the output sentence, effectively reducing the number of alignment crossings of the phrase-based decoder.", "Song et al.", "(2016) recast generation as a traveling salesman problem, after partitioning the graph into fragments and finding the best linearization order.", "Our models do not need to rely on a particular linearization of the input, attaining comparable performance even with a per example random 
traversal of the graph.", "Finally, all three systems intersect with a large language model trained on Gigaword.", "We show that our seq2seq model has the capacity to learn the same information as a language model, especially after pretraining on the external corpus.", "AMR Generation Data Augmentation Our paired training procedure is largely inspired by Sennrich et al.", "(2016) .", "They improve neural MT performance for low resource language pairs by using a back-translation MT system for a large monolingual corpus of the target language in order to create synthetic output, and mixing it with the human translations.", "We instead pre-train on the external corpus first, and then fine-tune on the original dataset.", "Methods In this section, we first provide the formal definition of AMR parsing and generation (section 3.1).", "Then we describe the sequence-to-sequence models we use (section 3.2), graph-to-sequence conversion (section 3.3), and our paired training procedure (section 3.4).", "Tasks We assume access to a training dataset D where each example pairs a natural language sentence s with an AMR a.", "The AMR is a rooted directed acylical graph.", "It contains nodes whose names correspond to sense-identified verbs, nouns, or AMR specific concepts, for example elect.01, Obama, and person in Figure 1 .", "One of these nodes is a distinguished root, for example, the node and in Figure 1 .", "Furthermore, the graph contains labeled edges, which correspond to PropBank-style (Palmer et al., 2005) semantic roles for verbs or other relations introduced for AMR, for example, arg0 or op1 in Figure 1 .", "The set of node and edge names in an AMR graph is drawn from a set of tokens C, and every word in a sentence is drawn from a vocabulary W .", "We study the task of training an AMR parser, i.e., finding a set of parameters θ P for model f , that predicts an AMR graphâ, given a sentence s: a = argmax a f a|s; θ P (1) We also consider the reverse task, training an AMR generator by finding a set of parameters θ G , for a model f that predicts a sentenceŝ, given an AMR graph a: s = argmax s f s|a; θ G (2) In both cases, we use the same family of predictors f , sequence-to-sequence models that use global attention, but the models have independent parameters, θ P and θ G .", "Sequence-to-sequence Model For both tasks, we use a stacked-LSTM sequenceto-sequence neural architecture employed in neural machine translation (Bahdanau et al., 2015; Wu et al., 2016 ).", "1 Our model uses a global attention decoder and unknown word replacement with small modifications (Luong et al., 2015) .", "The model uses a stacked bidirectional-LSTM encoder to encode an input sequence and a stacked LSTM to decode from the hidden states produced by the encoder.", "We make two modifications to the encoder: (1) we concatenate the forward and backward hidden states at every level of the stack instead of at the top of the stack, and (2) introduce dropout in the first layer of the encoder.", "The decoder predicts an attention vector over the encoder hidden states using previous decoder states.", "The attention is used to weigh the hidden states of the encoder and then predict a token in the output sequence.", "The weighted hidden states, the decoded token, and an attention signal from the previous time step (input feeding) are then fed together as input to the next decoder state.", "The decoder can optionally choose to output an unknown word symbol, in which case the predicted attention is used to copy a token directly from 
the input sequence into the output sequence.", "Linearization Our seq2seq models require that both the input and target be presented as a linear sequence of tokens.", "We define a linearization order for an AMR graph as any sequence of its nodes and edges.", "A linearization is defined as (1) a linearization order and (2) a rendering function that generates any number of tokens when applied to an element in the linearization order (see Section 4.2 for implementation details).", "Furthermore, for parsing, a valid AMR graph must be recoverable from the linearization.", "Paired Training Obtaining a corpus of jointly annotated pairs of sentences and AMR graphs is expensive and current datasets only extend to thousands of examples.", "Neural sequence-to-sequence models suffer from sparsity with so few training pairs.", "To reduce the effect of sparsity, we use an external unannotated corpus of sentences S e , and a procedure which pairs the training of the parser and generator.", "Our procedure is described in Algorithm 1, and first trains a parser on the dataset D of pairs of sentences and AMR graphs.", "Then it uses self-training Algorithm 1 Paired Training Procedure Input: Training set of sentences and AMR graphs (s, a) ∈ D, an unannotated external corpus of sentences Se, a number of self training iterations, N , and an initial sample size k. Output: Model parameters for AMR parser θP and AMR generator θG.", "1: θP ← Train parser on D Self-train AMR parser.", "2: S 1 e ← sample k sentences from Se 3: for i = 1 to N do 4: A i e ← Parse S i e using parameters θP Pre-train AMR parser.", "5: θP ← Train parser on (A i e , S i e ) Fine tune AMR parser.", "6: θP ← Train parser on D with initial parameters θP 7: S i+1 e ← sample k · 10 i new sentences from Se 8: end for 9: S N e ← sample k · 10 N new sentences from Se Pre-train AMR generator.", "10: Ae ← Parse S N e using parameters θP 11: θG ← Train generator on (A N e , S N e ) Fine tune AMR generator.", "12: θG ← Train generator on D using initial parameters θG 13: return θP , θG to improve the initial parser.", "Every iteration of self-training has three phases: (1) parsing samples from a large, unlabeled corpus S e , (2) creating a new set of parameters by training on S e , and (3) fine-tuning those parameters on the original paired data.", "After each iteration, we increase the size of the sample from S e by an order of magnitude.", "After we have the best parser from self-training, we use it to label AMRs for S e and pre-train the generator.", "The final step of the procedure fine-tunes the generator on the original dataset D. 
AMR Preprocessing We use a series of preprocessing steps, including AMR linerization, anonymization, and other modifications we make to sentence-graph pairs.", "Our methods have two goals: (1) reduce the complexity of the linearized sequences to make learning easier while maintaining enough original information, and (2) Graph Simplification In order to reduce the overall length of the linearized graph, we first remove variable names and the instance-of relation ( / ) before every concept.", "In case of re-entrant nodes we replace the variable mention with its co-referring concept.", "Even though this replacement incurs loss of information, often the surrounding context helps recover the correct realization, e.g., the possessive role :poss in the example of Figure 1 is strongly correlated with the surface form his.", "Following Pourdamghani et al.", "(2016) we also remove senses from all concepts for AMR generation only.", "Figure 2 (a) contains an example output after this stage.", "Anonymization of Named Entities Open-class types including NEs, dates, and numbers account for 9.6% of tokens in the sentences of the training corpus, and 31.2% of vocabulary W .", "83.4% of them occur fewer than 5 times in the dataset.", "In order to reduce sparsity and be able to account for new unseen entities, we perform extensive anonymization.", "First, we anonymize sub-graphs headed by one of AMR's over 140 fine-grained entity types that contain a :name role.", "This captures structures referring to entities such as person, country, miscellaneous entities marked with * -enitity, and typed numerical values, * -quantity.", "We exclude date entities (see the next section).", "We then replace these sub-graphs with a token indicating fine-grained type and an index, i, indicating it is the ith occurrence of that type.", "2 For example, in Figure 2 the sub-graph headed by country gets replaced with country 0.", "On the training set, we use alignments obtained using the JAMR aligner (Flanigan et al., 2014) and the unsupervised aligner of Pourdamghani et al.", "(2014) in order to find mappings of anonymized subgraphs to spans of text and replace mapped text with the anonymized token that we inserted into the AMR graph.", "We record this mapping for use during testing of generation models.", "If a generation model predicts an anonymization token, we find the corresponding token in the AMR graph and replace the model's output with the most frequent mapping observed during training for the entity name.", "If the entity was never observed, we copy its name directly from the AMR graph.", "Anonymizing Dates For dates in AMR graphs, we use separate anonymization tokens for year, month-number, month-name, day-number and day-name, indicating whether the date is mentioned by word or by number.", "3 In AMR gener-US officials held an expert group meeting in January 2002 in New York.", "ation, we render the corresponding format when predicted.", "Figure 2(b) contains an example of all preprocessing up to this stage.", "Named Entity Clusters When performing AMR generation, each of the AMR fine-grained entity types is manually mapped to one of the four coarse entity types used in the Stanford NER system (Finkel et al., 2005) : person, location, organization and misc.", "This reduces the sparsity associated with many rarely occurring entity types.", "Figure 2 (c) contains an example with named entity clusters.", "NER for Parsing When parsing, we must normalize test sentences to match our anonymized training data.", "To produce 
fine-grained named entities, we run the Stanford NER system and first try to replace any identified span with a fine-grained category based on alignments observed during training.", "If this fails, we anonymize the sentence using the coarse categories predicted by the NER system, which are also categories in AMR.", "After parsing, we deterministically generate AMR for anonymizations using the corresponding text span.", "Linearization Linearization Order Our linearization order is defined by the order of nodes visited by depth first search, including backward traversing steps.", "For example, in Figure 2 , starting at meet the order contains meet, :ARG0, person, :ARG1-of, expert, :ARG2-of, group, :ARG2-of, :ARG1-of, :ARG0.", "4 The order traverses children in the sequence they are presented in the AMR.", "We consider alternative orderings of children in Section 7 but always follow the pattern demonstrated above.", "Rendering Function Our rendering function marks scope, and generates tokens following the pre-order traversal of the graph: (1) if the element is a node, it emits the type of the node.", "(2) if the element is an edge, it emits the type of the edge and then recursively emits a bracketed string for the (concept) node immediately after it.", "In case the node has only one child we omit the scope markers (denoted with left \"(\", and right \")\" parentheses), thus significantly reducing the number of generated tokens.", "Figure 2(d) contains an example showing all of the preprocessing techniques and scope markers that we use in our full model.", "Experimental Setup We conduct all experiments on the AMR corpus used in SemEval-2016 We validated word embedding sizes and RNN hidden representation sizes by maximizing AMR development set performance (Algorithm 1 -line 1).", "We searched over the set {128, 256, 500, 1024} for the best combinations of sizes and set both to 500.", "Models were trained by optimizing cross-entropy loss with stochastic gradient descent, using a batch size of 100 and dropout rate of 0.5.", "Across all models when performance does not improve on the AMR dev set, we decay the learning rate by 0.8.", "For the initial parser trained on the AMR corpus, (Algorithm 1 -line 1), we use a single stack version of our model, set initial learning rate to 0.5 and train for 60 epochs, taking the best performing model on the development set.", "All subsequent models benefited from increased depth and we used 2-layer stacked versions, maintaining the same embedding sizes.", "We set the initial Gigaword sample size to k = 200, 000 and executed a maximum of 3 iterations of self-training.", "For pretraining the parser and generator, (Algorithm 1lines 4 and 9), we used an initial learning rate of 1.0, and ran for 20 epochs.", "We attempt to fine-tune the parser and generator, respectively, after every epoch of pre-training, setting the initial learning rate to 0.1.", "We select the best performing model on the development set among all of these fine-tuning 5 We use the multi-BLEU script from the MOSES decoder suite (Koehn et al., 2007) .", "Corpus Examples OOV@1 (Song et al., 2016) 21.1 22.4 TREETOSTR (Flanigan et al., 2016) 23.0 23.0 attempts.", "During prediction we perform decoding using beam search and set the beam size to 5 both for parsing and generation.", "Results Parsing Results Table 1 summarizes our development results for different rounds of self-training and test results for our final system, self-trained on 200k, 2M and 20M unlabeled Gigaword sentences.", "Through 
every round of self-training, our parser improves.", "Our final parser outperforms comparable seq2seq and character LSTM models by over 10 points.", "While much of this improvement comes from self-training, our model without Gigaword data outperforms these approaches by 3.5 points on F1.", "We attribute this increase in performance to different handling of preprocessing and more careful hyper-parameter tuning.", "All other models that we compare against use semantic resources, such as WordNet, dependency parsers or CCG parsers (models marked with * were trained with less data, but only evaluate on newswire text; the rest evaluate on the full test set, containing text from blogs).", "Our full models outperform JAMR, a graph-based model but still lags behind other parser-dependent systems (CAMR 6 ), and resource heavy approaches (SBMT).", "Table 3 summarizes our AMR generation results on the development and test set.", "We outperform all previous state-of-theart systems by the first round of self-training and further improve with the next rounds.", "Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points.", "7 Overall, our model incorporates less data than previous approaches as all reported methods train language models on the whole Gigaword corpus.", "We leave scaling our models to all of Gigaword for future work.", "Generation Results Sparsity Reduction Even after anonymization of open class vocabulary entries, we still encounter a great deal of sparsity in vocabulary given the small size of the AMR corpus, as shown in Table 2.", "By incorporating sentences from Gigaword we are able to reduce vocabulary sparsity dramatically, as we increase the size of sampled sentences: the out-of-vocabulary rate with a threshold of 5 reduces almost 5 times for GIGA-20M.", "Preprocessing Ablation Study We consider the contribution of each main component of our preprocessing stages while keeping our linearization order identical.", "Figure 2 contains examples for each setting of the ablations we evaluate on.", "First we evaluate using linearized graphs without paren-6 Since we are currently not using any Wikipedia resources for the prediction of named entities, we compare against the no-wikification version of the CAMR system.", "7 We also trained our generator on GIGA-2M and finetuned on LDC2014T12 in order to have a direct comparison with PBMT, and achieved a BLEU score of 29.7, i.e., 2.8 points of improvement.", "theses for indicating scope, Figure 2(c) , then without named entity clusters, Figure 2(b) , and additionally without any anonymization, Figure 2(a) .", "Tables 4 summarizes our evaluation on the AMR generation.", "Each components is required, and scope markers and anonymization contribute the most to overall performance.", "We suspect without scope markers our seq2seq models are not as effective at capturing long range semantic relationships between elements of the AMR graph.", "We also evaluated the contribution of anonymization to AMR parsing (Table 5 ).", "Following previous work, we find that seq2seq-based AMR parsing is largely ineffective without anonymization (Peng et al., 2017) .", "Linearization Evaluation In this section we evaluate three strategies for converting AMR graphs into sequences in the context of AMR generation and show that our models are largely agnostic to linearization orders.", "Our results argue, unlike SMT-based AMR generation methods (Pourdamghani et al., 2016) , that seq2seq models can learn to ignore 
artifacts of the conversion of graphs to linear sequences.", "Linearization Orders All linearizations we consider use the pattern described in Section 4.2, but differ on the order in which children are visited.", "Each linearization generates anonymized, scope-marked output (see Section 4), of the form shown in Figure 2(d) .", "Human The proposal traverses children in the order presented by human authored AMR annotations exactly as shown in Figure 2(d) .", "Linearization Order BLEU HUMAN 21.7 GLOBAL-RANDOM 20.8 RANDOM 20.3 Table 6 : BLEU scores for AMR generation for different linearization orders (DEV set).", "Global-Random We construct a random global ordering of all edge types appearing in AMR graphs and re-use it for every example in the dataset.", "We traverse children based on the position in the global ordering of the edge leading to a child.", "Random For each example in the dataset we traverse children following a different random order of edge types.", "Results We present AMR generation results for the three proposed linearization orders in Table 6 .", "Random linearization order performs somewhat worse than traversing the graph according to Human linearization order.", "Surprisingly, a per example random linearization order performs nearly identically to a global random order, arguing seq2seq models can learn to ignore artifacts of the conversion of graphs to linear sequences.", "Human-authored AMR leaks information The small difference between random and globalrandom linearizations argues that our models are largely agnostic to variation in linearization order.", "On the other hand, the model that follows the human order performs better, which leads us to suspect it carries extra information not apparent in the graphical structure of the AMR.", "To further investigate, we compared the relative ordering of edge pairs under the same parent to the relative position of children nodes derived from those edges in a sentence, as reported by JAMR alignments.", "We found that the majority of pairs of AMR edges (57.6%) always occurred in the same relative order, therefore revealing no extra generation order information.", "8 Of the examples corresponding to edge pairs that showed variation, 70.3% appeared in an order consistent with the order they were realized in the sentence.", "The relative ordering of some pairs of AMR edges was particularly indicative of generation order.", "For example, the relative ordering of edges with types location and time, was 17% more indicative of the generation order than the majority of generated locations before time.", "9 To compare to previous work we still report results using human orderings.", "However, we note that any practical application requiring a system to generate an AMR representation with the intention to realize it later on, e.g., a dialog agent, will need to be trained either using consistent, or randomderived linearization orders.", "Arguably, our models are agnostic to this choice.", "8 Qualitative Results Figure 3 shows example outputs of our full system.", "The generated text for the first graph is nearly perfect with only a small grammatical error due to anonymization.", "The second example is more challenging, with a deep right-branching structure, and a coordination of the verbs stabilize and push in the subordinate clause headed by state.", "The model omits some information from the graph, namely the concepts terrorist and virus.", "In the third example there are greater parts of the graph that are missing, such as the whole 
sub-graph headed by expert.", "Also the model makes wrong attachment decisions in the last two sub-graphs (it is the evidence that is unimpeachable and irrefutable, and not the equipment), mostly due to insufficient annotation (thing) thus making their generation harder.", "Finally, Table 7 summarizes the proportions of error types we identified on 50 randomly selected examples from the development set.", "We found that the generator mostly suffers from coverage issues, an inability to mention all tokens in the input, followed by fluency mistakes, as illustrated above.", "Attachment errors are less frequent, which supports our claim that the model is robust to graph linearization, and can successfully encode long range dependency information between concepts.", "Conclusions We applied sequence-to-sequence models to the tasks of AMR parsing and AMR generation, by carefully preprocessing the graph representation and scaling our models via pretraining on millions of unlabeled sentences sourced from Gigaword corpus.", "Crucially, we avoid relying on resources such as knowledge bases and externally trained parsers.", "We achieve competitive results for the parsing task (SMATCH 62.1) and state-of-theart performance for generation (BLEU 33.8) .", "For future work, we would like to extend our work to different meaning representations such as the Minimal Recursion Semantics (MRS; Copestake et al.", "(2005) ).", "This formalism tackles certain linguistic phenomena differently from AMR (e.g., negation, and co-reference), contains explicit annotation on concepts for number, tense and case, and finally handles multiple languages 10 (Bender, 2014) .", "Taking a step further, we would like to apply our models on Semantics-Based Machine Translation using MRS as an intermediate representation between pairs of languages, and investigate the added benefit compared to directly translating the surface strings, especially in the case of distant language pairs such as English and Japanese (Siegel, 2000) .", "limit :arg0 ( treaty :arg0-of ( control :arg1 arms ) ) :arg1 ( number :arg1 ( weapon :mod conventional :arg1-of ( deploy :arg2 ( relative-pos :op1 loc_0 :dir west ) :arg1-of possible ) ) ) SYS: the arms control treaty limits the number of conventional weapons that can be deployed west of Ural Mountains .", "REF: the arms control treaty limits the number of conventional weapons that can be deployed west of the Ural Mountains .", "COMMENT: disfluency state :arg0 ( person :arg0-of ( have-org-role :arg1 ( committee :mod technical ) :arg3 ( expert :arg1 person :arg2 missile :mod loc_0 ) ) ) :arg1 ( evidence :arg0 equipment :arg1 ( plan :arg1 ( transfer :arg1 ( contrast :arg1 ( missile :mod ( just :polarity -) ) :arg2 ( capable :arg1 thing :arg2 ( make :arg1 missile ) ) ) ) ) :mod ( impeach :polarity -:arg1 thing ) :mod ( refute :polarity -:arg1 thing ) ) SYS: a technical committee expert on the technical committee stated that the equipment is not impeach , but it is not refutes .", "REF: a technical committee of Indian missile experts stated that the equipment was unimpeachable and irrefutable evidence of a plan to transfer not just missiles but missile-making capability.", "COMMENT: coverage , disfluency, attachment state :arg0 report :arg1 ( obligate :arg1 ( government-organization :arg0-of ( govern :arg1 loc_0 ) ) :arg2 ( help :arg1 ( and :op1 ( stabilize :arg1 ( state :mod weak ) ) :op2 ( push :arg1 ( regulate :mod international :arg0-of ( stop :arg1 terrorist :arg2 ( use :arg1 ( information :arg2-of ( 
available :arg3-of free )) :arg2 ( and :op1 ( create :arg1 ( form :domain ( warfare :mod biology :example ( version :arg1-of modify :poss other_1 ) ) :mod new ) ) :op2 ( unleash :arg1 form ) ) ) ) ) ) ) ) ) REF: the report stated British government must help to stabilize weak states and push for international regulations that would stop terrorists using freely available information to create and unleash new forms of biological warfare such as a modified version of the influenza virus .", "COMMENT: coverage , disfluency, attachment SYS: the report stated that the Britain government must help stabilize the weak states and push international regulations to stop the use of freely available information to create a form of new biological warfare such as the modified version of the influenza .", "Figure 3 : Linearized AMR after preprocessing, reference sentence, and output of the generator.", "We mark with colors common error types: disfluency, coverage (missing information from the input graph), and attachment (implying a semantic relation from the AMR between incorrect entities)." ] }
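The HUMAN, GLOBAL-RANDOM and RANDOM linearization orders compared in the excerpt above can be illustrated with a short sketch. The nested-tuple graph encoding and the helper names below are assumptions made only for illustration; the paper's own preprocessing additionally anonymizes named entities and inserts scope markers, which is not reproduced here.

```python
import random

# Toy AMR-like node: (concept, [(edge_label, child_node), ...]).
# This encoding and the helper names are illustrative assumptions only.

def linearize(node, order_fn):
    """Depth-first linearization, visiting children in the order returned by order_fn."""
    concept, children = node
    parts = [concept]
    for label, child in order_fn(children):
        parts.append(f":{label} ( {linearize(child, order_fn)} )")
    return " ".join(parts)

# HUMAN: keep the child order of the human-authored annotation.
def human_order(children):
    return children

# GLOBAL-RANDOM: one random ranking of edge types, fixed for the whole dataset.
def make_global_random_order(all_edge_types, seed=0):
    types = list(all_edge_types)
    random.Random(seed).shuffle(types)
    rank = {t: i for i, t in enumerate(types)}
    return lambda children: sorted(children, key=lambda lc: rank.get(lc[0], len(rank)))

# RANDOM: a fresh random order of children for every example.
def random_order(children):
    shuffled = list(children)
    random.shuffle(shuffled)
    return shuffled

graph = ("limit",
         [("arg0", ("treaty", [("arg0-of", ("control", [("arg1", ("arms", []))]))])),
          ("arg1", ("number", []))])

global_random = make_global_random_order(["arg0", "arg1", "arg0-of", "location", "time"])
print(linearize(graph, human_order))
print(linearize(graph, global_random))
print(linearize(graph, random_order))
```

Because the global ranking of edge types is fixed once and reused for every example, all examples see the same relative child order, whereas the per-example shuffle varies freely; the excerpt reports that these two choices perform nearly identically, while the human-authored order carries a small amount of extra generation-order information.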
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "5", "6", "7", "7.1", "7.2", "9" ], "paper_header_content": [ "Introduction", "Related Work", "Methods", "Tasks", "Sequence-to-sequence Model", "Linearization", "Paired Training", "AMR Preprocessing", "Anonymization of Named Entities", "Linearization", "Experimental Setup", "Results", "Linearization Evaluation", "Linearization Orders", "Results", "Conclusions" ] }
GEM-SciDuet-train-57#paper-1106#slide-23
Attention
hold ARG0 person role US official ) ARG1 ( meet expert group ) ai = softmax X fi h(s), hi held an expert group meeting in January
hold ARG0 person role US official ) ARG1 ( meet expert group ) ai = softmax X fi h(s), hi held an expert group meeting in January
[]
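The formula on the slide above appears to denote standard soft attention over the encoder states of the linearized AMR, roughly a_i = softmax_i f(h^(s), h_i) for a decoder state h^(s) and encoder states h_i. The exact scoring function f is not recoverable from the slide text, so the dot-product score in the sketch below is an assumption.

```python
import numpy as np

def attention_weights(h_s, H):
    """h_s: (d,) decoder state; H: (n, d) encoder states over the linearized
    AMR tokens. Returns the (n,) attention distribution a."""
    scores = H @ h_s              # assumed dot-product form of f(h^(s), h_i)
    scores -= scores.max()        # for numerical stability
    e = np.exp(scores)
    return e / e.sum()            # softmax over input positions i

def context_vector(h_s, H):
    """Attention-weighted summary of the encoder states."""
    return attention_weights(h_s, H) @ H

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 16))      # e.g. 7 linearized-AMR positions, dim 16
h_s = rng.normal(size=16)
print(attention_weights(h_s, H).round(3))
```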
GEM-SciDuet-train-58#paper-1115#slide-0
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-ofthe-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the highresource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
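The MAP fit by L-BFGS of the heteroscedastic mixed-effects model (Model 2, equations (1), (5) and (6) in the paper content above) can be sketched as follows. The data layout, parameterization and initialization are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_model2(y, intent_ids, lang_ids, n_intents, n_langs):
    """y[k]: surprisal (bits) of observation k, expressing intent intent_ids[k]
    in language lang_ids[k]; missing translations are simply absent."""
    y = np.asarray(y, dtype=float)
    intent_ids = np.asarray(intent_ids)
    lang_ids = np.asarray(lang_ids)
    logy = np.log(y)

    def unpack(theta):
        log_n = theta[:n_intents]                      # log n_i keeps n_i > 0
        d = theta[n_intents:n_intents + n_langs]       # language difficulties d_j
        s2 = np.exp(theta[-1])                         # sigma^2 > 0
        return log_n, d, s2

    def neg_log_lik(theta):
        log_n, d, s2 = unpack(theta)
        n_i = np.exp(log_n[intent_ids])
        s2_i = np.log1p(np.expm1(s2) / n_i)            # eq. (5)
        mean = log_n[intent_ids] + d[lang_ids] + (s2 - s2_i) / 2.0   # eq. (6)
        return np.sum(0.5 * np.log(2.0 * np.pi * s2_i)
                      + (logy - mean) ** 2 / (2.0 * s2_i))

    theta0 = np.zeros(n_intents + n_langs + 1)
    theta0[:n_intents] = np.log(max(float(y.mean()), 1.0))  # crude init for n_i
    res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
    log_n, d, s2 = unpack(res.x)
    return np.exp(log_n), d, s2    # intent sizes n_i, difficulties d_j, sigma^2
```

Only the fitted difficulties d_j are compared across languages; as described above, held-out evaluation keeps d_j and sigma^2 fixed and refits n_i for the new intents, and no simplex constraint is needed because the scale of n_i is identifiable under Model 2.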
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-0
Questions and answers
0. Do current language models do equally well on all languages? No. 1. Which one do they struggle more with: German or English? German. 2. What about non-Indo-European languages, say Chinese? It depends. 3. What makes a language harder to model? Actually, rather technical factors. 4. Is Translationese easier? It's different, but not actually easier!
0. Do current language models do equally well on all languages? No. 1. Which one do they struggle more with: German or English? German. 2. What about non-Indo-European languages, say Chinese? It depends. 3. What makes a language harder to model? Actually, rather technical factors. 4. Is Translationese easier? It's different, but not actually easier!
[]
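The data selection described in the paper content above ("select Bibles such that the number of verses overall, i.e., the number of verses shared times the number of Bibles, is maximal", with at least 20000 shared verses and ties broken toward more Bibles) can be cast as an integer linear program along the following lines. The exact variables, constraints and tie-breaking weight the authors used with Gurobi are not spelled out, so this formulation is a guess.

```python
import gurobipy as gp
from gurobipy import GRB

def select_bibles(verses_by_bible, min_verses=20000):
    """verses_by_bible: dict mapping a Bible id to the set of verse ids it
    contains. Returns (selected Bibles, kept verses)."""
    bibles = list(verses_by_bible)
    all_verses = sorted(set().union(*verses_by_bible.values()))
    pairs = [(b, v) for b in bibles for v in verses_by_bible[b]]

    m = gp.Model("bible_selection")
    x = m.addVars(bibles, vtype=GRB.BINARY, name="bible")       # Bible selected?
    y = m.addVars(all_verses, vtype=GRB.BINARY, name="verse")   # verse kept?
    z = m.addVars(pairs, vtype=GRB.BINARY, name="pair")         # b selected AND v kept

    for b in bibles:
        contained = verses_by_bible[b]
        for v in all_verses:
            if v in contained:
                m.addConstr(z[b, v] <= x[b])
                m.addConstr(z[b, v] <= y[v])
            else:
                # a kept verse must appear in every selected Bible
                m.addConstr(y[v] <= 1 - x[b])

    m.addConstr(y.sum() >= min_verses)          # keep at least 20000 verses
    # objective = kept verses times selected Bibles; the tiny x-term breaks
    # ties in favour of including more Bibles
    m.setObjective(z.sum() + 1e-4 * x.sum(), GRB.MAXIMIZE)
    m.optimize()
    return ([b for b in bibles if x[b].X > 0.5],
            [v for v in all_verses if y[v].X > 0.5])
```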
GEM-SciDuet-train-58#paper-1115#slide-1
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-ofthe-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the highresource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
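The appendix text above describes splitting each corpus into blocks of 30 paragraphs and reserving 5 sentences per block for dev and for test. A small sketch of that bookkeeping follows; which 5 sentences inside a block go to dev or test is not specified in the text, so the slice positions below are an assumption.

```python
def block_split(sentences, block=30, n_dev=5, n_test=5):
    train, dev, test = [], [], []
    for start in range(0, len(sentences), block):
        chunk = sentences[start:start + block]
        test.extend(chunk[-n_test:])                    # last 5 of each block
        dev.extend(chunk[-(n_dev + n_test):-n_test])    # the 5 before those
        train.extend(chunk[:-(n_dev + n_test)])         # remaining ~20
    return train, dev, test

# For full blocks this gives the 2/3 : 1/6 : 1/6 proportions mentioned above.
```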
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-1
How to measure difficulty
Language models measure surprisal/information content (NLL; -log p()): en I love Florence! 5 bits Ich grüße meine Oma und die Familie daheim. Alle mensen worden vrij en gelijk in waardigheid en rechten geboren. 11 bits Issue 1: Different topics/styles/content Issue 2: Comparing scores en Resumption of the session. de Wiederaufnahme der Sitzung. nl Hervatting van de sessie. Solution: train and test on translations! Europarl: 21 languages share ~40M chars Bibles: 62 languages share ~4M chars and this one takes a big ILP to solve, which is really fun Gurobi Why? Bibles: 69 languages 62 languages share ~4M chars 13 language families and this one takes Use total bits of an open-vocabulary model.
Language models measure surprisal/information content (NLL; -log p()): en I love Florence! 5 bits Ich grüße meine Oma und die Familie daheim. Alle mensen worden vrij en gelijk in waardigheid en rechten geboren. 11 bits Issue 1: Different topics/styles/content Issue 2: Comparing scores en Resumption of the session. de Wiederaufnahme der Sitzung. nl Hervatting van de sessie. Solution: train and test on translations! Europarl: 21 languages share ~40M chars Bibles: 62 languages share ~4M chars and this one takes a big ILP to solve, which is really fun Gurobi Why? Bibles: 69 languages 62 languages share ~4M chars 13 language families and this one takes Use total bits of an open-vocabulary model.
[]
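The slide above scores a language by the total bits of surprisal an open-vocabulary model assigns to its test sentences. A toy illustration of that bookkeeping; the per-step log-probabilities would come from a char- or BPE-level RNN LM and are just an assumed input here.

```python
import math

def total_bits(step_logprobs):
    """step_logprobs: natural-log probabilities of each predicted unit in a sentence."""
    # NLL in bits: -sum over prediction steps of log2 p(unit | history).
    return -sum(lp / math.log(2) for lp in step_logprobs)

# A 4-unit sentence whose units each get probability 1/2 costs exactly 4 bits.
assert abs(total_bits([math.log(0.5)] * 4) - 4.0) < 1e-9
```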
GEM-SciDuet-train-58#paper-1115#slide-2
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
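For readability, here are the two regression models from the paper content above (whose notation was flattened by extraction), with y_ij the surprisal of sentence i in language j, n_i the latent intent size, and d_j the language difficulty:

```latex
% Model 1 (multiplicative mixed effects):
y_{ij} = n_i \cdot \exp(d_j) \cdot \exp(\epsilon_{ij}),
\qquad \epsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

% Model 2 (heteroscedastic variant, Fenton--Wilkinson approximation):
y_{ij} = n_i \cdot \exp(d_j) \cdot \exp(\epsilon_{ij}),
\qquad \sigma_i^2 = \ln\!\left(1 + \frac{\exp(\sigma^2) - 1}{n_i}\right),
\qquad \epsilon_{ij} \sim \mathcal{N}\!\left(\frac{\sigma^2 - \sigma_i^2}{2},\; \sigma_i^2\right)
```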
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
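The paper content above describes fitting the "Model 2" multiplicative mixed-effects regression by L-BFGS to recover per-intent sizes n_i and per-language difficulties d_j from a (possibly incomplete) table of surprisals. The sketch below is not the authors' implementation; it is a minimal Python illustration under the assumptions that surprisals are given in bits, that missing translations are simply masked out, and that the Fenton-Wilkinson variance formula from the text is used as written. All function and variable names (fit_difficulties, y, mask, etc.) are invented for this example.

```python
import numpy as np
from scipy.optimize import minimize

def fit_difficulties(y, mask):
    """y: (num_intents, num_langs) surprisals in bits; mask: True where a translation exists."""
    I, J = y.shape
    log_y = np.log(np.where(mask, y, 1.0))  # placeholder 1.0 for missing cells (masked out below)

    def unpack(theta):
        log_n = theta[:I]          # log of intent sizes n_i
        d = theta[I:I + J]         # language difficulties d_j
        s2 = np.exp(theta[-1])     # global variance sigma^2 (kept positive via exp)
        return log_n, d, s2

    def neg_log_lik(theta):
        log_n, d, s2 = unpack(theta)
        n = np.exp(log_n)[:, None]
        # Fenton-Wilkinson approximation: per-intent variance of the log-residual
        s2_i = np.log1p((np.exp(s2) - 1.0) / n)
        # log y_ij = log n_i + d_j + eps_ij,  with eps_ij ~ N((s2 - s2_i)/2, s2_i)
        mean = log_n[:, None] + d[None, :] + (s2 - s2_i) / 2.0
        ll = -0.5 * (np.log(2.0 * np.pi * s2_i) + (log_y - mean) ** 2 / s2_i)
        return -np.sum(ll[mask])

    # crude initialization: n_i from the average surprisal of each intent, d_j = 0
    row_means = np.nanmean(np.where(mask, y, np.nan), axis=1)
    theta0 = np.concatenate([np.log(row_means), np.zeros(J), [0.0]])
    res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
    log_n, d, _ = unpack(res.x)
    return np.exp(log_n), d
```

A call like fit_difficulties(y, ~np.isnan(y)) would return fitted intent sizes and language difficulties; the experiments described above additionally consider MAP estimation, full Bayesian inference in STAN, and the outlier-resistant Laplace variant (Model 2L), none of which are shown here.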
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-2
How to compare your language models across languages
We need to be open-vocabulary: no UNKs. Every UNK is cheating: morphologically rich languages have more UNKs, unfairly advantaging them. We can't normalize per word or even per character in languages individually. Example: if puč (cz) and Putsch (de) are equally likely, they should be equally difficult. Just use overall bits (i.e., surprisal/NLL) of an aligned sentence [note: total easily obtainable from BPC or perplexity by multiplying with total chars/words]
We need to be open-vocabulary: no UNKs. Every UNK is cheating: morphologically rich languages have more UNKs, unfairly advantaging them. We can't normalize per word or even per character in languages individually. Example: if puč (cz) and Putsch (de) are equally likely, they should be equally difficult. Just use overall bits (i.e., surprisal/NLL) of an aligned sentence [note: total easily obtainable from BPC or perplexity by multiplying with total chars/words]
[]
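As a quick numeric illustration of the "just use overall bits" point in the slide content above (with made-up numbers, not figures from the paper): per-character and per-word metrics only become comparable across languages once they are converted back to the total surprisal of the aligned text.

```python
import math

def total_bits_from_bpc(bpc: float, num_chars: int) -> float:
    """Bits-per-character times character count gives the total surprisal in bits."""
    return bpc * num_chars

def total_bits_from_perplexity(ppl: float, num_tokens: int) -> float:
    """Per-token perplexity converted back to total bits: num_tokens * log2(ppl)."""
    return num_tokens * math.log2(ppl)

# Hypothetical aligned sentence scored at two different granularities:
print(total_bits_from_bpc(bpc=1.2, num_chars=40))          # 48.0 bits
print(total_bits_from_perplexity(ppl=64.0, num_tokens=8))  # 48.0 bits -> same total, hence comparable
```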
GEM-SciDuet-train-58#paper-1115#slide-3
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018), we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
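The parameter estimation summarised just above ("We fit each regression model's parameters by L-BFGS") can be sketched in a few lines. The sketch below is a minimal reconstruction of Model 1 only, not the authors' implementation: Y is assumed to be an intents-by-languages array of per-sentence surprisals with NaN marking missing translations, sigma is dropped because it does not affect the arg max in the homoscedastic case, and centering d_j is one arbitrary way to remove the scale ambiguity noted in footnote 10.

    import numpy as np
    from scipy.optimize import minimize

    def fit_model1(Y):
        """Jointly estimate intent sizes n_i and language difficulties d_j for
        Model 1: y_ij = n_i * exp(d_j) * exp(eps_ij), eps_ij ~ N(0, sigma^2)."""
        I, J = Y.shape
        mask = ~np.isnan(Y)                      # missing translations are allowed
        logY = np.log(np.where(mask, Y, 1.0))    # dummy value where missing

        def unpack(theta):
            log_n, d = theta[:I], theta[I:]
            return log_n, d - d.mean()           # centre d_j: Model 1 is otherwise unidentifiable

        def objective(theta):                    # Gaussian negative log-likelihood up to
            log_n, d = unpack(theta)             # constants, with sigma held fixed
            resid = (logY - log_n[:, None] - d[None, :]) * mask
            return 0.5 * np.sum(resid ** 2)

        theta0 = np.concatenate([np.nanmean(np.where(mask, logY, np.nan), axis=1),
                                 np.zeros(J)])   # initialise log n_i from per-intent means
        res = minimize(objective, theta0, method="L-BFGS-B")  # numerical gradients suffice at this scale
        log_n, d = unpack(res.x)
        return np.exp(log_n), d                  # n_i and d_j

Model 2 would replace the constant residual variance with sigma_i^2 = ln(1 + (exp(sigma^2) - 1)/n_i); because that makes the scale of n_i meaningful, the centering trick would no longer be needed.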
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
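The verse/Bible selection that Appendix E casts as an integer linear program can be sketched as follows. This is not the authors' formulation, only one standard linearisation of the "shared verses times number of Bibles" objective: the paper solved its ILP with Gurobi, whereas PuLP's bundled CBC solver would only cope with toy instances; the tie-breaking in favour of more Bibles is omitted, and reading the 20000-verse requirement as "at least 20000 shared verses" is an assumption.

    import pulp

    def select_bibles(bibles, verses, has_verse, min_shared=20000):
        """has_verse[b] is assumed to map Bible b to the set of verse ids it contains."""
        prob = pulp.LpProblem("bible_selection", pulp.LpMaximize)
        y = pulp.LpVariable.dicts("bible", bibles, cat=pulp.LpBinary)  # select Bible b?
        x = pulp.LpVariable.dicts("verse", verses, cat=pulp.LpBinary)  # keep verse v?
        z = pulp.LpVariable.dicts("pair", (verses, bibles), cat=pulp.LpBinary)
        # Objective: kept verses times selected Bibles, linearised via z[v][b] = x[v] AND y[b].
        prob += pulp.lpSum(z[v][b] for v in verses for b in bibles)
        for v in verses:
            for b in bibles:
                prob += z[v][b] <= x[v]
                prob += z[v][b] <= y[b]
                if v not in has_verse[b]:
                    prob += x[v] + y[b] <= 1     # a kept verse must appear in every selected Bible
        prob += pulp.lpSum(x[v] for v in verses) >= min_shared  # assumed reading of the size constraint
        prob.solve()
        selected = [b for b in bibles if y[b].value() > 0.5]
        kept = [v for v in verses if x[v].value() > 0.5]
        return selected, kept

At the paper's scale (tens of thousands of verses times over a thousand Bibles) the pairwise z variables make this formulation large, which is presumably why a commercial solver and a few hours of compute were needed.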
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
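The correlational methodology behind the "What Correlates with Difficulty?" section listed in these headers (one Spearman correlation per candidate feature, with Benjamini and Hochberg's false-discovery-rate correction over all tests) can be sketched as follows; the dictionaries of difficulties and features are placeholders, not the paper's data.

    import numpy as np
    from scipy.stats import spearmanr

    def benjamini_hochberg(pvals, alpha=0.05):
        """Boolean mask of tests that survive the false-discovery-rate correction."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        passed = p[order] <= alpha * np.arange(1, m + 1) / m
        k = (np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    def correlate_features(difficulty, features, alpha=0.05):
        """difficulty: {language: d_j}; features: {feature_name: {language: value}}."""
        names, stats = list(features), []
        for name in names:
            langs = [l for l in difficulty if l in features[name]]
            rho, p = spearmanr([difficulty[l] for l in langs],
                               [features[name][l] for l in langs])
            stats.append((rho, p))
        keep = benjamini_hochberg([p for _, p in stats], alpha)
        return {n: (rho, p, bool(sig)) for n, (rho, p), sig in zip(names, stats, keep)}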
GEM-SciDuet-train-58#paper-1115#slide-3
How to aggregate multiple intents' surprisals into difficulties
For fully parallel corpora we can just sum everything up and compare; that is fair. But what if there's missing data? Or we want robustness? [Slide diagram: sentence-aligned multi-text with per-sentence LM surprisals/NLLs y_{i,j}, modelled with per-intent sizes n_i, per-language difficulties d_j, and log-normal noise.] This is a probabilistic model we can perform inference in!
For fully parallel corpora we can just sum everything up and compare; that is fair. But what if there's missing data? Or we want robustness? [Slide diagram: sentence-aligned multi-text with per-sentence LM surprisals/NLLs y_{i,j}, modelled with per-intent sizes n_i, per-language difficulties d_j, and log-normal noise.] This is a probabilistic model we can perform inference in!
[]
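As a concrete counterpart to the slide above: on a fully parallel test set languages can simply be ranked by total surprisal, the comparison used by Cotterell et al. (2018); once translations are missing, only the intents shared by every language could be summed fairly, which is what motivates the regression models. The nll dictionary and function names below are illustrative, not from the paper's code.

    def total_surprisal(nll, language, intents):
        """Sum of per-sentence surprisals (bits) over a shared test set."""
        return sum(nll[(i, language)] for i in intents)

    def naive_comparison(nll, languages):
        # Only intents present in every language can be summed fairly;
        # anything else is what the regression models (Models 1/2) are for.
        shared = set.intersection(*({i for (i, j) in nll if j == lang}
                                    for lang in languages))
        return {lang: total_surprisal(nll, lang, shared) for lang in languages}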
GEM-SciDuet-train-58#paper-1115#slide-4
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
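The "paired-sample multiplicative mixed-effects model" named in this abstract appears in the extracted paper text above only in typographically mangled form; restated cleanly (a reconstruction from the surrounding definitions, with y_{ij} the surprisal of sentence i in language j, n_i the intent size, d_j the language difficulty, and epsilon_{ij} the residual):

    % Model 1 (multiplicative mixed effects, homoscedastic)
    y_{ij} = n_i \cdot \exp(d_j) \cdot \exp(\varepsilon_{ij}), \qquad
    \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

    % Model 2 (heteroscedastic variant, Fenton-Wilkinson approximation)
    y_{ij} = n_i \cdot \exp(d_j) \cdot \exp(\varepsilon_{ij}), \qquad
    \sigma_i^2 = \ln\!\left(1 + \frac{\exp(\sigma^2) - 1}{n_i}\right), \qquad
    \varepsilon_{ij} \sim \mathcal{N}\!\left(\frac{\sigma^2 - \sigma_i^2}{2},\; \sigma_i^2\right)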
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
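The surprisal-aggregation regression described in the content above (Model 2, equations (1), (5), (6)) can be fit by minimizing the joint negative log-likelihood over the intent sizes n_i, the language difficulties d_j, and the global variance parameter with L-BFGS. The sketch below is a minimal illustration on toy surprisal values, not the authors' released code; the variable names, the toy data, and the log-space parametrization are assumptions.

```python
# Minimal MAP fit of "Model 2": y_ij = n_i * exp(d_j) * exp(eps_ij),
# eps_ij ~ N((sigma^2 - sigma_i^2)/2, sigma_i^2),
# sigma_i^2 = ln(1 + (exp(sigma^2) - 1) / n_i).
import numpy as np
from scipy.optimize import minimize

# toy surprisals (bits) for 4 intents x 3 languages; NaN marks a missing translation
y = np.array([[120., 135., 110.],
              [ 80.,  95., np.nan],
              [200., 230., 190.],
              [ 60.,  70.,  55.]])
I, J = y.shape
obs = ~np.isnan(y)

def unpack(theta):
    n  = np.exp(theta[:I])        # latent intent sizes n_i > 0
    d  = theta[I:I + J]           # language difficulties d_j
    s2 = np.exp(theta[-1])        # global variance sigma^2 > 0
    return n, d, s2

def neg_log_lik(theta):
    n, d, s2 = unpack(theta)
    s2_i = np.log1p((np.exp(s2) - 1.0) / n)        # eq. (5)
    mu_i = (s2 - s2_i) / 2.0                       # mean of eps_ij, eq. (6)
    eps  = np.log(y) - np.log(n)[:, None] - d[None, :]
    ll   = (-0.5 * (eps - mu_i[:, None]) ** 2 / s2_i[:, None]
            - 0.5 * np.log(2.0 * np.pi * s2_i[:, None]))
    # constant Jacobian term (y -> log y) is dropped: it does not affect the argmax
    return -np.sum(np.where(obs, ll, 0.0))

theta0 = np.concatenate([np.log(np.nanmean(y, axis=1)),   # init n_i from row means
                         np.zeros(J), [0.0]])
fit = minimize(neg_log_lik, theta0, method="L-BFGS-B")
n_hat, d_hat, s2_hat = unpack(fit.x)
print("estimated difficulties d_j:", np.round(d_hat, 3))
```

With real data, y would hold one NLL per (intent, language) pair from the per-language RNNLMs, and held-out evaluation would keep d_j and sigma fixed while re-fitting only the n_i for the new intents, as the text describes.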
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-4
Good open-vocabulary language models
Formerly state-of-the-art-ish AWD-LSTM (Merity et al., 2018) language models: [slide diagram: a chain of RNN cells reading the input one character at a time] char-RNNLM: t h e c a t c h a s e d
Formerly state-of-the-art-ish AWD-LSTM (Merity et al., 2018) language models: [slide diagram: a chain of RNN cells reading the input one character at a time] char-RNNLM: t h e c a t c h a s e d
[]
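The correlational study in this record tests each candidate predictor of difficulty with Spearman's rho and controls the false discovery rate with the Benjamini-Hochberg procedure at alpha = .05. A small sketch of that analysis on random stand-in data follows; the feature names are placeholders, not the paper's actual measurements.

```python
# Spearman correlations between fitted difficulties d_j and candidate features,
# with Benjamini-Hochberg FDR control over all tests (random toy data here).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
d_hat = rng.normal(size=21)                       # stand-in difficulty estimates
features = {name: rng.normal(size=21)             # stand-ins for MCC, HPE, dep. length, ...
            for name in ["MCC", "HPE-mean", "avg_dep_len", "char_len", "word_inventory"]}

results = []
for name, x in features.items():
    rho, p = spearmanr(d_hat, x)
    results.append((name, rho, p))

# Benjamini-Hochberg: find the largest k with p_(k) <= (k/m) * alpha,
# then reject every hypothesis whose p-value is at most that cutoff.
alpha, m = 0.05, len(results)
cutoff = 0.0
for k, (_, _, p) in enumerate(sorted(results, key=lambda r: r[2]), start=1):
    if p <= k / m * alpha:
        cutoff = p
for name, rho, p in results:
    print(f"{name:15s} rho={rho:+.3f} p={p:.3f} significant={p <= cutoff}")
```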
GEM-SciDuet-train-58#paper-1115#slide-5
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
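A minimal, illustrative sketch of the Model 2 regression described in the paper content above (intent sizes n_i and language difficulties d_j fit jointly by minimizing a negative log-likelihood with L-BFGS). This is not the authors' implementation; the use of SciPy, the NaN convention for missing translations, and all function names are assumptions of this sketch.

import numpy as np
from scipy.optimize import minimize

def model2_nll(params, y):
    # y: I x J matrix of per-sentence surprisals NLL(s_ij) in bits; np.nan marks a missing translation.
    I, J = y.shape
    log_n = params[:I]                 # log of the latent intent sizes n_i
    d = params[I:I + J]                # language difficulties d_j
    sigma2 = np.exp(params[-1])        # global variance sigma^2, kept positive via exp
    n = np.exp(log_n)
    # Fenton-Wilkinson variance: sigma_i^2 = ln(1 + (exp(sigma^2) - 1) / n_i)
    sigma2_i = np.log1p(np.expm1(sigma2) / n)
    # Model 2 implies log y_ij ~ Normal(log n_i + d_j + (sigma^2 - sigma_i^2)/2, sigma_i^2)
    mu = log_n[:, None] + d[None, :] + (sigma2 - sigma2_i)[:, None] / 2.0
    var = np.broadcast_to(sigma2_i[:, None], y.shape)
    resid = np.log(y) - mu
    # Gaussian NLL on log y; the parameter-free -log y Jacobian term is dropped since it does not affect the fit.
    nll = 0.5 * (np.log(2.0 * np.pi * var) + resid ** 2 / var)
    return np.nansum(nll)              # NaN entries (missing sentences) simply drop out

def fit_difficulties(y):
    I, J = y.shape
    x0 = np.concatenate([np.log(np.nanmean(y, axis=1)),   # initialize n_i from mean surprisal
                         np.zeros(J),                      # initialize d_j = 0
                         [0.0]])                           # initialize sigma^2 = 1
    # Numerical gradients are fine for a sketch, though slow for many intents.
    res = minimize(model2_nll, x0, args=(y,), method="L-BFGS-B")
    return res.x[I:I + J]              # point estimates of the language difficulties d_j

For held-out evaluation as described above, d_j and sigma^2 would be frozen and only the n_i of the new intents re-fit; the same objective can be reused with the corresponding parameter slices held fixed.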
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
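Each test sentence enters that regression as one surprisal value y_ij = NLL(s_ij) = -log2 p(s_ij). The helper below (again only a sketch; the nats-to-bits conversion and the dictionary input format are my assumptions, not part of the dataset) turns per-token log-probabilities into the surprisal matrix consumed by the fit_difficulties sketch above, with NaN marking missing translations.

import math
import numpy as np

def surprisal_bits(token_logprobs_nats):
    # NLL(s_ij) in bits: negate the model's total log-probability and convert nats to bits.
    return -sum(token_logprobs_nats) / math.log(2.0)

def build_surprisal_matrix(per_sentence_bits, intent_ids, language_ids):
    # per_sentence_bits: {(intent_id, language_id): bits}; pairs with no translation stay NaN.
    y = np.full((len(intent_ids), len(language_ids)), np.nan)
    row = {i: r for r, i in enumerate(intent_ids)}
    col = {j: c for c, j in enumerate(language_ids)}
    for (i, j), bits in per_sentence_bits.items():
        y[row[i], col[j]] = bits
    return y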
GEM-SciDuet-train-58#paper-1115#slide-5
Good open vocabulary language models Mielke and Eisner 2019
Formerly state-of-the-art-ish AWD-LSTM (Merity et al., 2018) language models: RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell char-RNNLM: t h e c a t c h a s e d RNN cell RNN cell RNN cell RNN cell RNN cell BPE-RNNLM, few merges: the ca@@ t cha@@ sed
Formerly state-of-the-art-ish AWD-LSTM (Merity et al., 2018) language models: RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell RNN cell char-RNNLM: t h e c a t c h a s e d RNN cell RNN cell RNN cell RNN cell RNN cell BPE-RNNLM, few merges: the ca@@ t cha@@ sed
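The slide contrasts a character-level RNNLM with a BPE-RNNLM whose units look like "the ca@@ t cha@@ sed". Below is a toy, self-contained sketch of how such BPE merges are learned and applied. It simplifies the Sennrich et al. (2016) algorithm (no end-of-word markers, merges applied in learned order) and is not the tooling used in the paper.

from collections import Counter

def learn_bpe(word_freqs, num_merges):
    # word_freqs: {"chased": 7, ...}; returns an ordered list of merge pairs.
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for syms, f in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[a, b] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = {tuple(_merge(syms, best)): f for syms, f in vocab.items()}
    return merges

def _merge(syms, pair):
    # Replace every adjacent occurrence of `pair` in the symbol sequence by its concatenation.
    out, i = [], 0
    while i < len(syms):
        if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
            out.append(syms[i] + syms[i + 1])
            i += 2
        else:
            out.append(syms[i])
            i += 1
    return out

def segment(word, merges):
    syms = list(word)
    for pair in merges:
        syms = _merge(syms, pair)
    return syms   # e.g. ["cha", "sed"], rendered "cha@@ sed" in subword-nmt style

With 0 merges, segmentation degenerates to characters; with many merges it approaches whole words, which is the knob the paper tunes when it refers to 0.4|V| merges.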
[]
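The paper content in this file also describes casting the Bible-subset selection (maximize shared verses times number of Bibles, subject to at least 20000 shared verses) as an integer linear program solved with Gurobi. The authors' exact formulation lives in their Appendix E; the following is only one plausible way to write such an ILP with the gurobipy API. The has_verse input format is invented for illustration, and the paper's tie-breaking toward more Bibles is omitted.

import gurobipy as gp
from gurobipy import GRB

def select_bibles(has_verse, min_verses=20000):
    # has_verse[k][i] is True iff Bible k contains verse i (illustrative input format).
    K, I = len(has_verse), len(has_verse[0])
    m = gp.Model("bible_subset")
    b = m.addVars(K, vtype=GRB.BINARY, name="bible")    # is Bible k selected?
    v = m.addVars(I, vtype=GRB.BINARY, name="verse")    # is verse i selected?
    z = m.addVars(K, I, vtype=GRB.BINARY, name="pair")  # b_k AND v_i, linearized product
    # A selected verse must appear in every selected Bible.
    m.addConstrs(b[k] + v[i] <= 1
                 for k in range(K) for i in range(I) if not has_verse[k][i])
    # z_ki can only be 1 if both b_k and v_i are 1 (upper bounds suffice when maximizing).
    m.addConstrs(z[k, i] <= b[k] for k in range(K) for i in range(I))
    m.addConstrs(z[k, i] <= v[i] for k in range(K) for i in range(I))
    m.addConstr(v.sum() >= min_verses)                  # enough shared data to train the LMs
    # Total kept = (#Bibles selected) x (#verses shared by all of them).
    m.setObjective(z.sum(), GRB.MAXIMIZE)
    m.optimize()                                        # K*I pair variables: large, hence "a few hours"
    return ([k for k in range(K) if b[k].X > 0.5],
            [i for i in range(I) if v[i].X > 0.5])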
GEM-SciDuet-train-58#paper-1115#slide-6
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
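The abstract above alludes to testing many typological predictors of difficulty; the content text spells this out as Spearman and Pearson correlations with a Benjamini-Hochberg correction for multiple comparisons. A hedged sketch of that analysis loop follows. The feature table is invented for illustration; spearmanr, pearsonr, and multipletests are real SciPy/statsmodels functions, but the pooling here (only Spearman p-values) is simpler than the paper's, which corrects across all reported tests.

from scipy.stats import spearmanr, pearsonr
from statsmodels.stats.multitest import multipletests

def correlate_with_difficulty(d, features):
    # d: array of language difficulties d_j; features: {"word inventory size": array, ...}
    rows, pvals = [], []
    for name, x in features.items():
        rho, p_s = spearmanr(x, d)
        r, p_p = pearsonr(x, d)
        rows.append((name, rho, r, p_p))
        pvals.append(p_s)
    # Control the false discovery rate at alpha = .05 (Benjamini-Hochberg), as in the paper.
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    for (name, rho, r, p_p), keep, q in zip(rows, reject, p_adj):
        print(f"{name}: Spearman rho={rho:.3f} (BH-adjusted p={q:.3f}, "
              f"significant={keep}); Pearson r={r:.3f} (raw p={p_p:.3f})")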
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
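The regression just described (Model 1, with Models 2 and 2L as refinements) is compact enough to sketch in code. The following is a minimal, hypothetical Python implementation of the Model 1 objective fit by L-BFGS via SciPy; all function and variable names are my own, and it omits Model 2's heteroscedastic variance, Model 2L's Laplace residuals, and any identifiability constraint on the d_j.

```python
# Minimal sketch of the multiplicative mixed-effects regression (Model 1),
# fit by L-BFGS as in the paper. Names are illustrative, not the authors' code.
import numpy as np
from scipy.optimize import minimize

def fit_difficulties(surprisals):
    """surprisals: dict mapping (intent_id, language_id) -> surprisal y_ij in bits.
    Missing translations are simply absent from the dict."""
    intents = sorted({i for i, _ in surprisals})
    langs = sorted({j for _, j in surprisals})
    i_idx = {i: k for k, i in enumerate(intents)}
    j_idx = {j: k for k, j in enumerate(langs)}
    rows = np.array([i_idx[i] for (i, j) in surprisals])
    cols = np.array([j_idx[j] for (i, j) in surprisals])
    logy = np.array([np.log(y) for y in surprisals.values()])

    def neg_log_lik(theta):
        log_n = theta[:len(intents)]
        d = theta[len(intents):len(intents) + len(langs)]
        log_sigma = theta[-1]
        # Model 1: log(y_ij / n_i) ~ Normal(d_j, sigma^2)
        resid = logy - log_n[rows] - d[cols]
        return np.sum(0.5 * (resid / np.exp(log_sigma)) ** 2 + log_sigma)

    theta0 = np.zeros(len(intents) + len(langs) + 1)
    theta0[:len(intents)] = logy.mean()  # crude initialization of the log n_i
    # Numerical gradients are used here; fine for a sketch, slow at full scale.
    result = minimize(neg_log_lik, theta0, method="L-BFGS-B")
    d_hat = result.x[len(intents):len(intents) + len(langs)]
    return dict(zip(langs, d_hat))
```

Optimizing log n_i and log sigma rather than n_i and sigma keeps both positive without bound constraints, and missing translations are handled simply by omitting the corresponding (intent, language) keys.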
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
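The Bible-selection step described in Appendix E can be written down as a small integer linear program. Below is a hedged sketch posed with PuLP rather than the Gurobi solver used in the paper; the linearization and all identifiers are my own, the tie-breaking toward more Bibles is omitted, and at the paper's scale (106 Bibles, tens of thousands of verses) a commercial solver would still be advisable.

```python
# Hypothetical ILP sketch of the Bible/verse selection from Appendix E.
import pulp

def select_parallel_block(verses_of, min_verses=20000):
    """verses_of: dict mapping each Bible ID to the set of verse IDs it contains."""
    bibles = sorted(verses_of)
    verses = sorted(set().union(*verses_of.values()))

    x = pulp.LpVariable.dicts("bible", bibles, cat=pulp.LpBinary)  # Bible selected?
    z = pulp.LpVariable.dicts("verse", verses, cat=pulp.LpBinary)  # verse selected?
    # y[(b, v)] linearizes the product x[b] * z[v]; under maximization it is
    # pushed up to min(x[b], z[v]), so only the two upper bounds are needed.
    y = pulp.LpVariable.dicts(
        "cell", [(b, v) for b in bibles for v in verses], cat=pulp.LpBinary)

    prob = pulp.LpProblem("bible_selection", pulp.LpMaximize)
    prob += pulp.lpSum(y.values())  # = number of shared verses times selected Bibles

    for b in bibles:
        for v in verses:
            prob += y[(b, v)] <= x[b]
            prob += y[(b, v)] <= z[v]
            if v not in verses_of[b]:
                # A selected verse must be present in every selected Bible.
                prob += x[b] + z[v] <= 1
    prob += pulp.lpSum(z.values()) >= min_verses  # enough verses to train on

    prob.solve()
    return ([b for b in bibles if x[b].value() == 1],
            [v for v in verses if z[v].value() == 1])
```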
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-6
Choosing the number of BPE merges: how many is best?
It depends on the language (total surprisal, given merges as a ratio of the vocabulary): [plot of per-language dev-set surprisal across BPE merge settings, points labelled by language code] is this one going to be fine? [second plot, same axes] it doesn't matter that much.
It depends on the language (total surprisal, given merges as a ratio of the vocabulary): [plot of per-language dev-set surprisal across BPE merge settings, points labelled by language code] is this one going to be fine? [second plot, same axes] it doesn't matter that much.
[]
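To make the slide's "number of merges" knob concrete, here is a minimal from-scratch sketch of byte pair encoding in the spirit of Sennrich et al. (2016). This is not the subword-nmt implementation the paper builds on; the function names and toy corpus are mine, and real experiments would learn the merges on the full training corpus (the paper's global setting is 0.4 * |V| merges, where |V| is the training vocabulary size).

```python
# Toy BPE learner: repeatedly merge the most frequent adjacent symbol pair.
import re
import collections

def get_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

def learn_bpe(word_freqs, num_merges):
    # Words start out segmented into characters plus an end-of-word marker.
    vocab = {' '.join(list(w)) + ' </w>': f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
        merges.append(best)
    return merges

word_freqs = {'low': 5, 'lower': 2, 'newest': 6, 'widest': 3}
print(learn_bpe(word_freqs, num_merges=10))
```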
GEM-SciDuet-train-58#paper-1115#slide-7
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
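The inline equations for Model 2 are garbled in the extracted paper text above. Reconstructed from the surrounding definitions (the Fenton-Wilkinson approximation, which matches the mean and variance of a sum of n_i i.i.d. log-normal factors), they would read roughly as follows. This is a best-effort reconstruction, not the authors' original typesetting: the residual symbol is written here as \epsilon, and the tags mirror the equation numbers quoted in the text.

```latex
% Best-effort reconstruction of Model 2 (heteroscedastic multiplicative
% mixed-effects), corresponding to the quoted equations (1), (5), (6).
\begin{align}
y_{ij}      &= n_i \cdot \exp(d_j) \cdot \exp(\epsilon_{ij}) \tag{1} \\
\sigma_i^2  &= \ln\!\left(1 + \frac{\exp(\sigma^2) - 1}{n_i}\right) \tag{5} \\
\epsilon_{ij} &\sim \mathcal{N}\!\left(\frac{\sigma^2 - \sigma_i^2}{2},\; \sigma_i^2\right) \tag{6}
\end{align}
```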
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-7
Difficulties for char-/BPE-RNNLM: 21 Europarl languages
[Scatter plot; axis labels: "difficulty (100) using char-RNNLM" and "difficulty (100) using BPE-RNNLM with 0.4|V| merges"; visible point labels: et, es, pt, nl, cs, ro, fi, bg, it, el, da, lt, sv, sk, en, sl; annotation: "easier with chars".]
[Scatter plot; axis labels: "difficulty (100) using char-RNNLM" and "difficulty (100) using BPE-RNNLM with 0.4|V| merges"; visible point labels: et, es, pt, nl, cs, ro, fi, bg, it, el, da, lt, sv, sk, en, sl; annotation: "easier with chars".]
[]
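The slide content above is the label text of a scatter plot comparing per-language difficulties under the char-RNNLM and the BPE-RNNLM, and the quoted paper notes that switching between the two architectures reorders most languages. One hypothetical way to quantify that reordering, given two fitted difficulty vectors over the same languages, is a rank correlation. The arrays below are random stand-ins, not values from the paper, and fit_model2 refers to the illustrative sketch above.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
# Stand-in difficulty vectors over the same 21 languages; in practice these
# would come from two fits such as fit_model2 above, run on BPE-level and
# character-level surprisals respectively.
d_bpe = rng.normal(0.0, 0.05, size=21)
d_char = d_bpe + rng.normal(0.0, 0.05, size=21)

rho, p_rank = spearmanr(d_bpe, d_char)  # rank agreement: how stable the language ordering is
r, p_lin = pearsonr(d_bpe, d_char)      # linear agreement between the two difficulty scales
print(f"Spearman rho={rho:.3f} (p={p_rank:.3g}); Pearson r={r:.3f} (p={p_lin:.3g})")
```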
GEM-SciDuet-train-58#paper-1115#slide-8
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
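The "far simpler statistics" mentioned at the end of this abstract are, according to the body text, the size of the word inventory |V|, the raw character length of the corpus, and the related type/token ratio. A small illustrative helper for computing them from a tokenized corpus is sketched below; the function name and the decision to ignore whitespace in the character count are assumptions, not specifications from the paper.

```python
def simple_corpus_statistics(tokenized_sentences):
    """Per-language statistics of the kind the paper finds predictive of
    modeling difficulty. `tokenized_sentences` is a list of token lists."""
    tokens = [tok for sent in tokenized_sentences for tok in sent]
    vocab = set(tokens)
    return {
        "num_chars": sum(len(tok) for tok in tokens),          # raw character length (whitespace ignored here)
        "vocab_size": len(vocab),                               # raw word inventory |V|
        "type_token_ratio": len(vocab) / max(len(tokens), 1),   # values near 1 are a traditional omen of undertraining
    }

# Toy usage:
print(simple_corpus_statistics([["this", "is", "a", "test"], ["a", "second", "test"]]))
```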
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
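The Bible selection described above (Appendix E) picks a subset of Bibles and a subset of verses such that every chosen Bible contains every chosen verse, maximizing the number of Bibles times the number of shared verses while keeping at least 20000 verses. Below is a minimal sketch of one standard ILP linearization of that product objective, on toy data and with the open-source PuLP/CBC solver; the authors report solving their exact formulation (which may differ in detail) with Gurobi, and all names and numbers here are illustrative.

```python
import pulp

# Toy inventory of which verses each Bible contains (illustrative, not the real corpus).
bible_verses = {
    "eng": {1, 2, 3, 4, 5},
    "deu": {1, 2, 3, 4},
    "fra": {1, 2, 4, 5},
    "fin": {2, 3, 4, 5},
}
all_verses = set().union(*bible_verses.values())
MIN_VERSES = 3          # the paper requires at least 20000 shared verses

prob = pulp.LpProblem("bible_selection", pulp.LpMaximize)
b = {j: pulp.LpVariable(f"b_{j}", cat="Binary") for j in bible_verses}   # Bible j selected?
v = {i: pulp.LpVariable(f"v_{i}", cat="Binary") for i in all_verses}     # verse i selected?
# z[i, j] = 1 iff verse i and Bible j are both selected; defined only where Bible j has verse i.
z = {(i, j): pulp.LpVariable(f"z_{i}_{j}", cat="Binary")
     for j, verses in bible_verses.items() for i in verses}

# Objective: number of selected (Bible, verse) cells = |selected Bibles| * |selected verses|.
prob += pulp.lpSum(z.values())
for (i, j), z_ij in z.items():
    prob += z_ij <= v[i]
    prob += z_ij <= b[j]
# Full parallelism: a selected Bible must not be missing any selected verse.
for j, verses in bible_verses.items():
    for i in all_verses - verses:
        prob += v[i] + b[j] <= 1
prob += pulp.lpSum(v.values()) >= MIN_VERSES

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen_bibles = [j for j, var in b.items() if pulp.value(var) > 0.5]
chosen_verses = sorted(i for i, var in v.items() if pulp.value(var) > 0.5)
print("Bibles:", chosen_bibles, "shared verses:", chosen_verses)
```

On the real corpus this linearization creates one z variable per (Bible, verse) pair, which is why a commercial solver and a few hours of compute are a reasonable expectation.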
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-8
Difficulties for char-/BPE-RNNLM: 21 Europarl languages and 106 Bibles
Scatter plots of per-language difficulty estimates: x-axis "difficulty (×100) using BPE-RNNLM with 0.4|V| merges", y-axis "difficulty (×100) using char-RNNLM"; one panel for the 21 Europarl languages and one for the 106 Bibles, with points labeled by ISO language codes (deu, eng, fra, ...); the diagonal separates the regions "easier with BPE" and "easier with chars", and deu sits near the hard extreme while eng sits near the easy one.
Scatter plots of per-language difficulty estimates: x-axis "difficulty (×100) using BPE-RNNLM with 0.4|V| merges", y-axis "difficulty (×100) using char-RNNLM"; one panel for the 21 Europarl languages and one for the 106 Bibles, with points labeled by ISO language codes (deu, eng, fra, ...); the diagonal separates the regions "easier with BPE" and "easier with chars", and deu sits near the hard extreme while eng sits near the easy one.
[]
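The slide content above is a scatter of char-RNNLM difficulty against BPE-RNNLM difficulty; the paper observes that German stays hardest under both models while most other languages switch places. A short sketch of how that ranking (dis)agreement could be quantified, assuming the two difficulty vectors are aligned by language (all values below are placeholders, not the paper's estimates):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

langs = ["deu", "hun", "fin", "fra", "eng", "lit"]            # illustrative subset
d_bpe = np.array([1.12, 1.10, 1.02, 1.01, 0.93, 0.94])        # placeholder difficulties
d_char = np.array([1.15, 1.06, 1.05, 0.99, 0.96, 0.92])

rho, p_rho = spearmanr(d_bpe, d_char)
tau, p_tau = kendalltau(d_bpe, d_char)
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f}), Kendall tau = {tau:.3f} (p = {p_tau:.3f})")

# Which languages move most between the two rankings?
rank_bpe = d_bpe.argsort().argsort()
rank_char = d_char.argsort().argsort()
for lang, rb, rc in sorted(zip(langs, rank_bpe, rank_char), key=lambda t: -abs(t[1] - t[2])):
    print(f"{lang}: rank {rb} under BPE, rank {rc} under char")
```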
GEM-SciDuet-train-58#paper-1115#slide-9
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-ofthe-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the highresource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
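The abstract above compares languages by how well language models predict parallel text; the quantity that feeds the regression later in the paper is the per-sentence surprisal y_ij = −log2 p(s_ij), measured in bits. The paper uses char- and BPE-level RNN language models; purely to make the measured quantity concrete, here is a self-contained toy sketch with an add-α character bigram model standing in for the RNNLM (model choice and data are illustrative only):

```python
import math
from collections import Counter, defaultdict

BOS, EOS = "\x02", "\x03"

def train_char_bigram(sentences, alpha=1.0):
    """Add-alpha smoothed character bigram LM; a toy stand-in for the paper's RNNLMs."""
    counts = defaultdict(Counter)
    vocab = {EOS}
    for s in sentences:
        vocab.update(s)
        chars = [BOS] + list(s) + [EOS]
        for prev, cur in zip(chars, chars[1:]):
            counts[prev][cur] += 1
    V = len(vocab)

    def log2prob(prev, cur):
        total = sum(counts[prev].values())
        return math.log2((counts[prev][cur] + alpha) / (total + alpha * V))

    return log2prob

def surprisal_bits(log2prob, sentence):
    """y_ij = -log2 p(s_ij): bits the model needs to encode the whole sentence."""
    chars = [BOS] + list(sentence) + [EOS]
    return -sum(log2prob(prev, cur) for prev, cur in zip(chars, chars[1:]))

train = ["the cat sat on the mat", "the dog sat on the log"]
lm = train_char_bigram(train)
print(f"{surprisal_bits(lm, 'the cat sat on the log'):.1f} bits")
```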
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
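Models 1, 2, and 2L above are fit with L-BFGS. A minimal sketch of that regression for Model 1 only (y_ij = n_i · exp(d_j) · exp(ε_ij) with Gaussian ε and a missing-data mask), using scipy's L-BFGS-B with numerical gradients; the surprisal matrix is a toy illustration, and Model 2's heteroscedastic variance or Model 2L's Laplace residuals would change only the likelihood term. Because Model 1 is identifiable only up to a shift between d_j and log n_i, the difficulties are reported mean-centered (the paper's "simplex" constraint plays a similar role).

```python
import numpy as np
from scipy.optimize import minimize

# y[i, j] = surprisal (bits) of intent i in language j; NaN marks a missing translation.
# Toy numbers only -- real runs use thousands of held-out intents.
y = np.array([
    [210.0, 231.0, 225.0],
    [105.0, 118.0, np.nan],
    [ 98.0, 109.0, 104.0],
    [310.0, np.nan, 330.0],
])
mask = ~np.isnan(y)
I, J = y.shape
log_y = np.log(np.where(mask, y, 1.0))                   # log y_ij; dummy 0.0 where missing

def unpack(theta):
    return theta[:I], theta[I:I + J], theta[-1]          # log n_i, d_j, log sigma

def neg_log_lik(theta):
    # Gaussian NLL of the log-residuals log(y_ij / n_i) - d_j over observed cells
    # (the log-normal Jacobian term is constant in the parameters, so it is dropped).
    log_n, d, log_sigma = unpack(theta)
    sigma2 = np.exp(2.0 * log_sigma)
    resid = mask * (log_y - log_n[:, None] - d[None, :])
    return 0.5 * (mask.sum() * np.log(2.0 * np.pi * sigma2) + (resid ** 2).sum() / sigma2)

theta0 = np.concatenate([np.log(np.nanmean(y, axis=1)), np.zeros(J), [0.0]])
res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
log_n_hat, d_hat, log_sigma_hat = unpack(res.x)
print("per-language difficulties d_j (mean-centered):", np.round(d_hat - d_hat.mean(), 3))
```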
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
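The paper content quoted above specifies the heteroscedastic multiplicative mixed-effects regression ("Model 2"), in which each surprisal is modeled as y_ij = n_i * exp(d_j) * exp(eps_ij) with per-intent noise variance sigma_i^2 = ln(1 + (exp(sigma^2) - 1) / n_i), and the parameters are fit with L-BFGS. The following is a minimal sketch of such a MAP fit in Python; the array layout, the NaN encoding of missing translations, the initialization, and the toy data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of fitting "Model 2" (heteroscedastic multiplicative mixed effects)
# to a matrix of per-sentence surprisals by MAP / maximum likelihood with L-BFGS.
# Y[i, j] = surprisal (bits) of intent i in language j, NaN where missing (assumption).
import numpy as np
from scipy.optimize import minimize

def fit_model2(Y):
    I, J = Y.shape
    mask = ~np.isnan(Y)
    logY = np.where(mask, np.log(np.where(mask, Y, 1.0)), 0.0)

    def unpack(theta):
        log_n = theta[:I]            # latent intent sizes n_i (log scale)
        d = theta[I:I + J]           # per-language difficulties d_j
        s2 = np.exp(theta[-1])       # global variance parameter sigma^2
        return log_n, d, s2

    def neg_log_lik(theta):
        log_n, d, s2 = unpack(theta)
        n = np.exp(log_n)
        # Fenton-Wilkinson variance: sigma_i^2 = ln(1 + (exp(sigma^2) - 1) / n_i)
        s2_i = np.log1p(np.expm1(s2) / n)[:, None]             # shape (I, 1)
        mu = log_n[:, None] + d[None, :] + (s2 - s2_i) / 2.0    # mean of log y_ij
        ll = -0.5 * (np.log(2 * np.pi * s2_i) + (logY - mu) ** 2 / s2_i)
        return -np.sum(ll[mask])

    # initialize n_i at the mean observed surprisal of intent i, d_j at 0
    n0 = np.nanmean(Y, axis=1)
    theta0 = np.concatenate([np.log(n0), np.zeros(J), [np.log(0.1)]])
    res = minimize(neg_log_lik, theta0, method="L-BFGS-B", options={"maxiter": 500})
    log_n, d, s2 = unpack(res.x)
    return np.exp(log_n), d, s2

# toy usage: 5 intents, 3 languages, language 2 slightly "harder"
rng = np.random.default_rng(0)
n_true = rng.uniform(20, 60, size=(5, 1))
Y = n_true * np.exp(np.array([0.0, 0.05, 0.1])) * np.exp(rng.normal(0, 0.05, (5, 3)))
n_hat, d_hat, s2_hat = fit_model2(Y)
# the absolute level of d is only weakly identified; compare relative values
print(d_hat - d_hat.mean())
```

Because Model 2 ties the noise variance to n_i, the scale of the intent sizes is informed by the data itself, so this sketch imposes no sum-to-J constraint on the difficulties; any such constraint would be an extra modeling choice.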
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-9
How about morphological counting complexity (Sagot, 2013)?
[Slide figure: scatter plots of per-language modeling difficulty against morphological counting complexity, with Europarl language codes (en, da, nl, fr, sv, de, sk, el, it, ro, es, pt, bg, sl, et, pl, lt, cs, fi, hu, lv) as point labels, Finnish (fi) marked as an outlier, and axis ticks at 0, 5, 10.]
[Slide figure: scatter plots of per-language modeling difficulty against morphological counting complexity, with Europarl language codes (en, da, nl, fr, sv, de, sk, el, it, ro, es, pt, bg, sl, et, pl, lt, cs, fi, hu, lv) as point labels, Finnish (fi) marked as an outlier, and axis ticks at 0, 5, 10.]
[]
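Appendices D and E of the paper content in the record above both state that each corpus is split by dividing the data into blocks of 30 units (paragraphs or verses) and taking 5 units each for the development and test sets, leaving the remainder for training, which yields roughly a 2/3, 1/6, 1/6 split spread uniformly over the corpus. A small sketch of such a splitter follows; which positions within a block are held out is not specified in the text, so taking the first five and the next five is purely an assumption.

```python
# Sketch of the block-wise split described in Appendices D/E: blocks of 30 units,
# 5 units to dev and 5 to test per block, the rest to train (~2/3, 1/6, 1/6).
def block_split(units, block_size=30, n_dev=5, n_test=5):
    train, dev, test = [], [], []
    for start in range(0, len(units), block_size):
        chunk = units[start:start + block_size]
        dev.extend(chunk[:n_dev])                 # assumed positions within the block
        test.extend(chunk[n_dev:n_dev + n_test])
        train.extend(chunk[n_dev + n_test:])
    return train, dev, test

# e.g. the 78169 aligned Europarl paragraphs: split indices once, apply to every language
indices = list(range(78169))
train_idx, dev_idx, test_idx = block_split(indices)
```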
GEM-SciDuet-train-58#paper-1115#slide-10
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
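The abstract reports trying, and failing, to correlate the estimated difficulties with linguistic factors; in the paper body each candidate feature is tested with Spearman's rho and the resulting p-values are corrected with the Benjamini and Hochberg (1995) procedure at alpha = .05. A minimal sketch of that style of analysis is given below; the dictionaries `difficulty` and `features` are hypothetical placeholders, not the paper's data, and the print formatting is purely illustrative.

```python
# Sketch of the correlational analysis: Spearman correlations between estimated
# language difficulties d_j and candidate features, with Benjamini-Hochberg control
# of the false discovery rate across all tests.
import numpy as np
from scipy.stats import spearmanr

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg (1995): reject all hypotheses up to the largest k with
    p_(k) <= alpha * k / m, where p_(1) <= ... <= p_(m) are the sorted p-values."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def correlate(difficulty, features, alpha=0.05):
    # difficulty: language -> d_j; features: feature name -> (language -> value)
    names, rhos, pvals = [], [], []
    for name, values in features.items():
        langs = sorted(set(difficulty) & set(values))
        rho, p = spearmanr([difficulty[l] for l in langs], [values[l] for l in langs])
        names.append(name); rhos.append(rho); pvals.append(p)
    for name, rho, p, sig in zip(names, rhos, pvals, bh_reject(pvals, alpha)):
        print(f"{name:30s} rho={rho:+.3f} p={p:.4f} {'*' if sig else ''}")
```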
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
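The regression results above come down to correlating the fitted per-language difficulties d_j with simple corpus statistics (raw character length, vocabulary size) via Spearman's ρ and Pearson's r, with Benjamini-Hochberg correction over the many tests. A minimal sketch of that computation is given below, assuming placeholder arrays for the difficulties and vocabulary sizes (none of these numbers come from the paper), and a generic Benjamini-Hochberg routine rather than the authors' code.

```python
# Hypothetical sketch: correlate per-language difficulty estimates d_j with a
# simple corpus statistic (here vocabulary size |V|), then apply a
# Benjamini-Hochberg correction across several such tests.
import numpy as np
from scipy import stats

difficulty = np.array([0.95, 1.02, 1.10, 0.88, 1.05])        # fitted d_j per language (placeholder)
vocab_size = np.array([21000, 35000, 60000, 18000, 48000])   # |V| in each training set (placeholder)

rho, p_spearman = stats.spearmanr(difficulty, vocab_size)
r, p_pearson = stats.pearsonr(difficulty, vocab_size)
print(f"Spearman rho={rho:.3f} (p={p_spearman:.3g}), Pearson r={r:.3f} (p={p_pearson:.3g})")

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected null hypotheses at FDR level alpha."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = len(pvals)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = pvals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# p-values from all feature/model comparisons would go here
print(benjamini_hochberg([p_spearman, p_pearson, 0.4, 0.8]))
```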
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-10
Other linguistically motivated regressors
WALS: Prefixing vs. Suffixing Morphology (for languages where present)? WALS: Order of Subject, Object and Verb (for languages where present)? Head-POS Entropy (Dehouck and Denis, 2018)? ...neither mean nor skew shows correlation. Average dependency length (computed using UDPipe (Straka et al., 2016))? ...correlation! But not significant after correcting for multiple hypotheses.
WALS: Prefixing vs. Suffixing Morphology (for languages where present)? WALS: Order of Subject, Object and Verb (for languages where present)? Head-POS Entropy (Dehouck and Denis, 2018)? ...neither mean nor skew shows correlation. Average dependency length (computed using UDPipe (Straka et al., 2016))? ...correlation! But not significant after correcting for multiple hypotheses.
[]
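As a rough illustration of the "average dependency length" regressor mentioned on the slide above, the sketch below computes mean head-dependent distance from a CoNLL-U file such as UDPipe output. It is not the Futrell et al. (2015) procedure used in the paper (which re-heads prepositions and complementizers, among other standardizations); it only skips punctuation, the root, multiword-token lines, and empty nodes. The file name is a hypothetical example.

```python
# Rough sketch: average dependency length from a CoNLL-U parse (e.g., UDPipe output).
# Columns in CoNLL-U: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC.
def average_dependency_length(conllu_path):
    total, count = 0, 0
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cols = line.split("\t")
            tok_id, head, deprel = cols[0], cols[6], cols[7]
            if "-" in tok_id or "." in tok_id:    # skip multiword tokens and empty nodes
                continue
            if head == "0" or deprel == "punct":  # skip the root and punctuation dependents
                continue
            total += abs(int(tok_id) - int(head))
            count += 1
    return total / count if count else float("nan")

# "europarl.fr.conllu" is a made-up file name
print(average_dependency_length("europarl.fr.conllu"))
```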
GEM-SciDuet-train-58#paper-1115#slide-11
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
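To make the abstract's "paired-sample multiplicative mixed-effects model" concrete, here is a toy sketch of its simplest form (Model 1 in the paper body): y_ij ≈ n_i · exp(d_j), fit in log space with L-BFGS and a soft mean-zero constraint on d_j for identifiability. The surprisal matrix below is made up, missing entries are handled with a mask, and the paper's preferred variant (Model 2) additionally models heteroscedastic noise, which this sketch omits.

```python
# Toy sketch of the multiplicative mixed-effects idea: y_ij ~ n_i * exp(d_j),
# fit in log space. Placeholder data; NaN marks a missing translation.
import numpy as np
from scipy.optimize import minimize

y = np.array([[110.0, 125.0,  98.0],    # y[i, j] = surprisal of intent i in language j
              [ 60.0,  70.0,  55.0],
              [200.0, np.nan, 180.0]])
I, J = y.shape
mask = ~np.isnan(y)
log_y = np.where(mask, np.log(np.where(mask, y, 1.0)), 0.0)

def objective(params):
    # squared-error objective in log space (Gaussian log-residuals, fixed variance)
    # plus a soft mean-zero constraint on d_j so the model is identifiable
    log_n, d = params[:I], params[I:]
    resid = (log_y - log_n[:, None] - d[None, :]) * mask
    return 0.5 * np.sum(resid ** 2) + 1e3 * np.mean(d) ** 2

x0 = np.concatenate([log_y.sum(axis=1) / mask.sum(axis=1), np.zeros(J)])
res = minimize(objective, x0, method="L-BFGS-B")
log_n_hat, d_hat = res.x[:I], res.x[I:]
print("language difficulties d_j:", d_hat)
```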
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ_j² to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n_i will not only have larger y_ij values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ² of log(y_ij / n_i) is assumed to be independent of the independent variable n_i, which predicts that the distribution of y_ij should spread out linearly as the information content n_i increases: e.g., p(y_ij ≥ 13 | n_i = 10) = p(y_ij ≥ 26 | n_i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log(y_ij / n_i) to come closer to its mean d_j as the random effects of individual translational choices average out.", "We address this issue by assuming that y_ij results from n_i ∈ ℕ independent choices: y_ij = exp(d_j) · Σ_{k=1}^{n_i} exp(ε_ijk) (3), ε_ijk ∼ N(0, σ²) (4). The number of bits for the k-th informational unit now varies by a factor of exp(ε_ijk) that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960), 12 yielding Model 2: y_ij = n_i · exp(d_j) · exp(ε_ij) (1), σ_i² = ln(1 + (exp(σ²) − 1) / n_i) (5), ε_ij ∼ N((σ² − σ_i²) / 2, σ_i²) (6), in which the noise term ε_ij now depends on n_i.", "Unlike (4), this formula no longer requires n_i ∈ ℕ; we allow any n_i ∈ ℝ_{>0}, which will also let us use gradient descent in estimating n_i.", "In effect, fitting the model chooses each n_i so that the resulting intent-specific but language-independent distribution of n_i · exp(ε_ij) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to non-integral n_i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of ε_ij is the same for every j.", "It no longer has mean 0, but it depends only on n_i.", "after it is scaled by exp(d_j) for each language j, will assign high probability to the observed y_ij.", "Notice that in Model 2, the scale of n_i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ_i falls off with n_i.", "This contrasts with Model 1, where doubling all the n_i values could be compensated for by halving all the exp(d_j) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of ε_ij.", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6).", "This heavy-tailed ε_ij distribution can be viewed as approximating a version of Model 2 in which the ε_ijk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
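The appendix passages above (F.2 and F.3) relate language difficulty to raw character length, word-inventory size, and type/token ratio. As a rough illustration only, and not the authors' code, the Python sketch below computes these heuristics from tokenized training text and correlates them with fitted difficulty scores; the corpora, the difficulty values, and all names are illustrative placeholders.

```python
# Illustrative sketch (not from the paper): compute the simple corpus heuristics
# discussed above (raw character length, word-inventory size |V|, type/token ratio)
# and correlate them with fitted per-language difficulties d_j.
# All data below are invented placeholders.
from scipy.stats import pearsonr, spearmanr

corpora = {  # language -> tokenized training text (placeholder)
    "eng": "the cat sat on the mat . the dog sat too .",
    "deu": "die Katze sass auf der Matte . der Hund sass auch .",
    "fin": "kissa istui matolla . koirakin istui .",
    "fra": "le chat est assis sur le tapis . le chien aussi .",
}
difficulty = {"eng": -0.06, "deu": 0.09, "fin": 0.02, "fra": -0.01}  # placeholder d_j

def heuristics(text):
    tokens = text.split()            # corpora are assumed pre-tokenized
    types = set(tokens)
    return {
        "char_len": len(text.replace(" ", "")),   # raw length in characters
        "vocab": len(types),                      # word-inventory size |V|
        "ttr": len(types) / len(tokens),          # type/token ratio
    }

langs = sorted(corpora)
d = [difficulty[l] for l in langs]
for feat in ("char_len", "vocab", "ttr"):
    x = [heuristics(corpora[l])[feat] for l in langs]
    r, p_r = pearsonr(x, d)
    rho, p_rho = spearmanr(x, d)
    print(f"{feat}: Pearson r={r:.3f} (p={p_r:.3g}), Spearman rho={rho:.3f} (p={p_rho:.3g})")
```

With one point per language, the resulting p-values would still need the multiple-test correction mentioned above before being interpreted.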
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-11
Very simple heuristics are very predictive
Raw sequence length # predictions char-RNNLM difficulty i.e., for the char-RNNLM puccz is easier than Putschde! i.e., the BPE-RNNLM still suffers if a language has high type-token-ratio! Significant on: not Europarl but Bibles at p Wow! What is happening here? We have many conjectures...
Raw sequence length # predictions char-RNNLM difficulty i.e., for the char-RNNLM puccz is easier than Putschde! i.e., the BPE-RNNLM still suffers if a language has high type-token-ratio! Significant on: not Europarl but Bibles at p Wow! What is happening here? We have many conjectures...
[]
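Appendices D and E in the passages above describe the same deterministic data split for Europarl paragraphs and Bible verses: blocks of 30 aligned units, 5 units per block for the development set, 5 for the test set, and the remainder for training, giving proportions of 2/3, 1/6, and 1/6. The sketch below reproduces that procedure with illustrative names; since the paper does not say which positions inside each block go to dev and test, that choice here is arbitrary.

```python
# Sketch of the 2/3 / 1/6 / 1/6 block split described in Appendices D and E:
# cut the aligned units into blocks of 30, send 5 per block to dev, 5 to test,
# and keep the remaining 20 for training.  Which positions inside a block go
# to dev/test is not specified in the paper; taking the first ten is a guess.
def block_split(units, block_size=30, n_dev=5, n_test=5):
    train, dev, test = [], [], []
    for start in range(0, len(units), block_size):
        block = units[start:start + block_size]
        dev.extend(block[:n_dev])
        test.extend(block[n_dev:n_dev + n_test])
        train.extend(block[n_dev + n_test:])
    return train, dev, test

if __name__ == "__main__":
    verses = [f"verse_{i}" for i in range(90)]   # placeholder aligned units
    train, dev, test = block_split(verses)
    print(len(train), len(dev), len(test))       # -> 60 15 15
```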
GEM-SciDuet-train-58#paper-1115#slide-12
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
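The abstract above, like the paper content that follows, scores every sentence by its surprisal under an open-vocabulary language model, NLL(s) = -log2 p(s). The sketch below shows only that bookkeeping step, turning per-unit (character or BPE) probabilities from some assumed model into bits per sentence; the probabilities are placeholders rather than output of a real RNNLM.

```python
# Surprisal bookkeeping only: NLL(s) = -log2 p(s), summed over the units
# (characters or BPE pieces) that an open-vocabulary model predicts.
# The per-unit probabilities are placeholders standing in for a real RNNLM.
import math

def sentence_surprisal_bits(unit_probs):
    """Return -log2 p(s) given the model's probability for each predicted unit."""
    return -sum(math.log2(p) for p in unit_probs)

probs = [0.25, 0.1, 0.5, 0.05, 0.3]            # hypothetical per-unit probabilities
y_ij = sentence_surprisal_bits(probs)          # one y_ij entry for the regression
print(f"{y_ij:.2f} bits")
```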
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
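The split procedure described near the end of the paper text above (blocks of 30 units, 5 each for development and test, the remaining 20 for training) is simple enough to sketch in a few lines of Python. The sketch below is not the authors' released code; the function and variable names are made up for illustration, and which five positions within each block go to dev and test is an assumption, since the text does not specify it.

def block_split(units, block_size=30, n_dev=5, n_test=5):
    # `units` can be Europarl paragraphs or Bible verses; the paper only
    # states the block size and the per-block dev/test counts, not which
    # positions are taken, so taking the first 5 + next 5 is an assumption.
    train, dev, test = [], [], []
    for start in range(0, len(units), block_size):
        block = units[start:start + block_size]
        dev.extend(block[:n_dev])
        test.extend(block[n_dev:n_dev + n_test])
        train.extend(block[n_dev + n_test:])
    return train, dev, test

# Dummy stand-ins for the 25996 selected verses:
verses = ["verse_{}".format(i) for i in range(25996)]
train, dev, test = block_split(verses)
print(len(train), len(dev), len(test))   # 17326 4335 4335, i.e. roughly 2/3, 1/6, 1/6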
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-12
Translationese: translations as a separate language
Common assumption: Translationese is somehow simpler than native text. We have partial parallel data that we can use to evaluate our models, with one column per sub-language (en-original, en-translated, de-original, de-translated, nl-original, nl-translated) and example cells such as "The German...", "Der deutsche...", "De Duitse..." and "Thank you...", "Vielen Dank...", "Hartelijk..." ...and indeed the original languages seem harder. But we missed something!
Common assumption: Translationese is somehow simpler than native text. We have partial parallel data that we can use to evaluate our models, with one column per sub-language (en-original, en-translated, de-original, de-translated, nl-original, nl-translated) and example cells such as "The German...", "Der deutsche...", "De Duitse..." and "Thank you...", "Vielen Dank...", "Hartelijk..." ...and indeed the original languages seem harder. But we missed something!
[]
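As a rough companion to the slide content above ("native" versus "translated" sub-languages evaluated on partial parallel data), the snippet below shows one way such bookkeeping could look. The record layout (keys "intent_id", "lang", "orig_lang", "text") and the min-based balancing are assumptions made for this sketch only; the experiment in the paper additionally caps the balanced sample at roughly 1M English-side words and keeps all languages trained on the same intents.

import random

def sublanguage(row):
    # e.g. "de_native" if the sentence was originally uttered in German,
    # "de_translated" if it was translated into German from another language.
    suffix = "native" if row["lang"] == row["orig_lang"] else "translated"
    return "{}_{}".format(row["lang"], suffix)

def balanced_training_sample(rows, lang, seed=0):
    # Equal numbers of native and translated sentences for one language,
    # so the language model for `lang` is not biased toward translationese.
    native = [r for r in rows if r["lang"] == lang and r["orig_lang"] == lang]
    translated = [r for r in rows if r["lang"] == lang and r["orig_lang"] != lang]
    n = min(len(native), len(translated))
    rng = random.Random(seed)
    return rng.sample(native, n) + rng.sample(translated, n)

# Tiny usage example with made-up rows:
rows = [
    {"intent_id": 1, "lang": "de", "orig_lang": "de", "text": "Der deutsche..."},
    {"intent_id": 1, "lang": "en", "orig_lang": "de", "text": "The German..."},
    {"intent_id": 2, "lang": "de", "orig_lang": "en", "text": "Vielen Dank..."},
    {"intent_id": 2, "lang": "en", "orig_lang": "en", "text": "Thank you..."},
]
print([sublanguage(r) for r in rows])
print(balanced_training_sample(rows, "de"))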
GEM-SciDuet-train-58#paper-1115#slide-13
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
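The "paired-sample multiplicative mixed-effects model" named in this abstract is developed in Section 3 of the paper text that follows. As a simplified, self-contained illustration of the idea — the homoscedastic Model 1 fit by least squares on log-surprisals, not the heteroscedastic Model 2 the authors prefer, and with a zero-mean constraint on the difficulties in place of the paper's own identifiability handling — the sketch below recovers per-language difficulties d_j and per-intent sizes n_i from a partially observed surprisal matrix. All data and names here are synthetic placeholders.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in for the real surprisal table: y[i, j] is the surprisal
# (in bits) of the sentence expressing intent i in language j, and np.nan
# marks a missing translation.
n_intents, n_langs = 200, 6
true_n = rng.gamma(shape=5.0, scale=20.0, size=n_intents)    # latent intent sizes
true_d = np.array([0.10, -0.05, 0.00, 0.20, -0.15, -0.10])   # latent difficulties
noise = rng.normal(0.0, 0.05, size=(n_intents, n_langs))
y = true_n[:, None] * np.exp(true_d[None, :] + noise)
y[rng.random((n_intents, n_langs)) < 0.1] = np.nan           # ~10% missing at random

observed = ~np.isnan(y)
log_y = np.where(observed, np.log(np.where(observed, y, 1.0)), 0.0)

def objective(theta):
    # theta packs log n_i for every intent followed by d_j for every language.
    log_n = theta[:n_intents]
    d = theta[n_intents:]
    d = d - d.mean()                 # simple identifiability constraint: mean(d) = 0
    resid = log_y - log_n[:, None] - d[None, :]
    return float(np.sum(np.where(observed, resid, 0.0) ** 2))

counts = np.maximum(observed.sum(axis=1), 1)
theta0 = np.concatenate([log_y.sum(axis=1) / counts, np.zeros(n_langs)])
res = minimize(objective, theta0, method="L-BFGS-B")
d_hat = res.x[n_intents:] - res.x[n_intents:].mean()
print("estimated difficulties:", np.round(d_hat, 3))
print("true difficulties:     ", np.round(true_d - true_d.mean(), 3))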
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
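Both Appendix D and Appendix E above describe the same train/dev/test split: divide the data into blocks of 30 paragraphs (or verses), take 5 items each for the development and test sets, and leave the rest for training, giving the stated 2/3, 1/6, 1/6 proportions. Below is a minimal sketch of that procedure; it is our own reconstruction, and since the paper only specifies the counts, taking the held-out items from the front of each block is an assumption.

```python
# Sketch of the block-wise 2/3 / 1/6 / 1/6 split described in Appendices D and E.
# Assumption: dev and test items come from the front of each 30-item block;
# the paper specifies only the counts per block, not the positions.
def block_split(items, block_size=30, n_dev=5, n_test=5):
    train, dev, test = [], [], []
    for start in range(0, len(items), block_size):
        block = items[start:start + block_size]
        dev.extend(block[:n_dev])
        test.extend(block[n_dev:n_dev + n_test])
        train.extend(block[n_dev + n_test:])
    return train, dev, test

# e.g. splitting the 25996 selected Bible verses (represented here by their indices)
train, dev, test = block_split(list(range(25996)))
```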
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-13
We trained on mostly translationese
[Chart: languages sorted by absolute # native sentences: en fr de es nl it pt sv el fi pl da ro hu sk cs sl lt bg et lv] Of course we will then find it easier...
[Chart: languages sorted by absolute # native sentences: en fr de es nl it pt sv el fi pl da ro hu sk cs sl lt bg et lv] Of course we will then find it easier...
[]
GEM-SciDuet-train-58#paper-1115#slide-14
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
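The abstract above compares language models across languages; the basic quantity behind those comparisons is the surprisal of a sentence, NLL(s) = -log2 p(s). The sketch below only illustrates that definition: it substitutes a smoothed character unigram model for the paper's RNN language models, and the training string and test sentence are toy inputs, not corpus data.

```python
# Toy illustration of per-sentence surprisal in bits. The unigram character
# "language model" is a stand-in for the char-RNN / BPE-RNN models used in the
# paper, chosen only to keep the example self-contained and runnable.
import math
from collections import Counter

def char_unigram_lm(train_text):
    counts = Counter(train_text)
    total = sum(counts.values())
    vocab = set(counts) | {"<unk>"}          # reserve probability mass for unseen chars
    return {c: (counts.get(c, 0) + 1) / (total + len(vocab)) for c in vocab}

def surprisal_bits(lm, sentence):
    """NLL(sentence) = -log2 p(sentence) under an independent-characters model."""
    return -sum(math.log2(lm.get(c, lm["<unk>"])) for c in sentence)

lm = char_unigram_lm("the cat sat on the mat ")
print(surprisal_bits(lm, "the mat sat"))     # one y_ij value in the paper's notation
```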
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
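Model 2 above (Section 3.2) states y_ij = n_i * exp(d_j) * exp(eps_ij), with sigma_i^2 = ln(1 + (exp(sigma^2) - 1) / n_i) and eps_ij ~ N((sigma^2 - sigma_i^2)/2, sigma_i^2), and its parameters are fit with L-BFGS (Section 3.4). Below is a compact maximum-likelihood (flat-prior MAP) sketch of that fit, reconstructed from equations (1), (5), and (6) rather than taken from any released code; it assumes a fully observed surprisal matrix Y with Y[i, j] = NLL(sentence i in language j), and missing translations would simply be masked out of the summed objective.

```python
# Sketch of a maximum-likelihood fit of "Model 2" by L-BFGS (not the authors' code).
# Y is a hypothetical fully observed matrix: Y[i, j] = surprisal of sentence i in
# language j.
import numpy as np
from scipy.optimize import minimize

def model2_neg_log_lik(params, Y):
    n_int, n_lang = Y.shape
    log_n = params[:n_int]                        # log intent sizes n_i
    d = params[n_int:n_int + n_lang]              # language difficulties d_j
    sigma2 = np.exp(params[-1])                   # global variance sigma^2 > 0
    n = np.exp(log_n)[:, None]
    sigma2_i = np.log1p((np.exp(sigma2) - 1.0) / n)            # eq. (5)
    mu = np.log(n) + d[None, :] + (sigma2 - sigma2_i) / 2.0    # mean of log y_ij
    resid = np.log(Y) - mu
    return np.sum(0.5 * resid ** 2 / sigma2_i + 0.5 * np.log(sigma2_i))

def fit_difficulties(Y):
    n_int, n_lang = Y.shape
    x0 = np.concatenate([np.log(Y.mean(axis=1)), np.zeros(n_lang), [0.0]])
    res = minimize(model2_neg_log_lik, x0, args=(Y,), method="L-BFGS-B")
    return res.x[n_int:n_int + n_lang]            # estimated difficulty d_j per language
```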
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
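The block-wise split described above (blocks of 30 units, 5 sentences each for dev and test, the rest for training, giving the 2/3 : 1/6 : 1/6 proportions) can be sketched as a small helper. Which positions inside each block go to dev and test is not stated in the paper, so the choice below is an assumption.

```python
# Minimal sketch of the block-wise train/dev/test split; the within-block
# positions assigned to dev/test are an assumed convention.
def block_split(units, block_size=30, n_dev=5, n_test=5):
    train, dev, test = [], [], []
    for start in range(0, len(units), block_size):
        block = units[start:start + block_size]
        dev.extend(block[:n_dev])                  # 5 units per block -> dev
        test.extend(block[n_dev:n_dev + n_test])   # next 5 -> test
        train.extend(block[n_dev + n_test:])       # remaining 20 -> train
    return train, dev, test

# usage: train, dev, test = block_split(list_of_verses)
```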
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-14
Repeat the experiment with fairly balanced training data
Change the training sets! We can rebalance a single language, leaving the others merged, i.e.: en_original en_translated de nl The German... Der deutsche... De Duitse... Thank you... Vielen Dank... Hartelijk... And the result: the difficulties are now the same! (more precisely, native is 0.004 easier)
Change the training sets! We can rebalance a single language, leaving the others merged, i.e.: en_original en_translated de nl The German... Der deutsche... De Duitse... Thank you... Vielen Dank... Hartelijk... And the result: the difficulties are now the same! (more precisely, native is 0.004 easier)
[]
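A rough sketch of the rebalancing step this slide describes: for one language, sample equal numbers of natively written and translated sentences for its training set while the other languages stay merged. The field names (`orig_lang`, `text`) and the pairing scheme are assumptions; the equal native/translated proportions and the roughly 1M-word budget follow Section 6 of the paper.

```python
import random

def rebalance_one_language(rows, lang="en", target_words=1_000_000, seed=0):
    """rows: sentences of one language's column, each tagged with the
    original language of its intent (hypothetical schema)."""
    native = [r for r in rows if r["orig_lang"] == lang]
    translated = [r for r in rows if r["orig_lang"] != lang]
    rng = random.Random(seed)
    rng.shuffle(native)
    rng.shuffle(translated)
    picked, words = [], 0
    for nat, tra in zip(native, translated):        # equal parts of each class
        picked.extend([nat, tra])
        words += len(nat["text"].split()) + len(tra["text"].split())
        if words >= target_words:                   # ~1M-word training budget
            break
    return picked
```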
GEM-SciDuet-train-58#paper-1115#slide-15
1115
What Kind of Language Is Hard to Language-Model?
How language-agnostic are current state-of-the-art NLP tools? Are there some types of language that are easier to model with current methods? In prior work (Cotterell et al., 2018) we attempted to address this question for language modeling, and observed that recurrent neural network language models do not perform equally well over all the high-resource European languages found in the Europarl corpus. We speculated that inflectional morphology may be the primary culprit for the discrepancy. In this paper, we extend these earlier experiments to cover 69 languages from 13 language families using a multilingual Bible corpus. Methodologically, we introduce a new paired-sample multiplicative mixed-effects model to obtain language difficulty coefficients from at-least-pairwise parallel corpora. In other words, the model is aware of inter-sentence variation and can handle missing data. Exploiting this model, we show that "translationese" is not any easier to model than natively written language in a fair comparison. Trying to answer the question of what features difficult languages have in common, we try and fail to reproduce our earlier (Cotterell et al., 2018) observation about morphological complexity and instead reveal far simpler statistics of the data that seem to drive complexity in a much larger sample.
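The difficulty coefficients mentioned here are fit to per-sentence surprisals, which the paper body defines as NLL(s) = -log2 p(s), i.e., the number of bits a language model needs for the sentence. A tiny helper, assuming the model returns per-token natural-log probabilities:

```python
import math

def surprisal_bits(token_logprobs):
    """Sum of -log2 p over a sentence, given natural-log token probabilities."""
    return -sum(token_logprobs) / math.log(2)

# y_ij = surprisal_bits(...) for sentence i in language j feeds the regression.
```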
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 ], "paper_content_text": [ "Introduction Do current NLP tools serve all languages?", "Technically, yes, as there are rarely hard constraints that prohibit application to specific languages, as long as there is data annotated for the task.", "However, in practice, the answer is more nuanced: as most studies seem to (unfairly) assume English is representative of the world's languages (Bender, 2009 ), we do not have a clear idea how well models perform cross-linguistically in a controlled setting.", "In this work, we look at current methods for language modeling and attempt to determine whether there are typological properties that make certain languages harder to language-model than others.", "One of the oldest tasks in NLP (Shannon, 1951 ) is language modeling, which attempts to estimate a distribution p(x) over strings x of a language.", "Recent years have seen impressive improvements with recurrent neural language models (e.g., Merity et al., 2018) .", "Language modeling is an important component of tasks such as speech recognition, machine translation, and text normalization.", "It has also enabled the construction of contextual word embeddings that provide impressive performance gains in many other NLP tasks (Peters et al., 2018 )-though those downstream evaluations, too, have focused on a small number of (mostly English) datasets.", "In prior work (Cotterell et al., 2018) , we compared languages in terms of the difficulty of language modeling, controlling for differences in content by using a multi-lingual, fully parallel text corpus.", "Few such corpora exist: in that paper, we made use of the Europarl corpus which, unfortunately, is not very typologically diverse.", "Using a corpus with relatively few (and often related) languages limits the kinds of conclusions that can be drawn from any resulting comparisons.", "In this paper, we present an alternative method that does not require the corpus to be fully parallel, so that collections consisting of many more languages can be compared.", "Empirically, we report language-modeling results on 62 languages from 13 language families using Bible translations, and on the 21 languages used in the European 
Parliament proceedings.", "We suppose that a language model's surprisal on a sentence-the negated log of the probability it assigns to the sentence-reflects not only the length and complexity of the specific sentence, but also the general difficulty that the model has in predicting sentences of that language.", "Given language models of diverse languages, we jointly recover each language's difficulty parameter.", "Our regression formula explains the variance in the dataset better than previous approaches and can also deal with missing translations for some purposes.", "Given these difficulty estimates, we conduct a correlational study, asking which typological features of a language are predictive of modeling difficulty.", "Our results suggest that simple properties of a language-the word inventory and (to a lesser extent) the raw character sequence length-are statistically significant indicators of modeling difficulty within our large set of languages.", "In contrast, we fail to reproduce our earlier results from Cotterell et al.", "(2018) , 1 which suggested morphological complexity as an indicator of modeling complexity.", "In fact, we find no tenable correlation to a wide variety of typological features, taken from the WALS dataset and other sources.", "Additionally, exploiting our model's ability to handle missing data, we directly test the hypothesis that translationese leads to easier language-modeling (Baker, 1993; Lembersky et al., 2012) .", "We ultimately cast doubt on this claim, showing that, under the strictest controls, translationese is different, but not any easier to model according to our notion of difficulty.", "We conclude with a recommendation: The world 1 We can certainly replicate those results in the sense that, using the surprisals from those experiments, we achieve the same correlations.", "However, we did not reproduce the results under new conditions (Drummond, 2009 ).", "Our new conditions included a larger set of languages, a more sophisticated difficulty estimation method, and-perhaps crucially-improved language modeling families that tend to achieve better surprisals (or equivalently, better perplexity).", "being small, typology is in practice a small-data problem.", "there is a real danger that cross-linguistic studies will under-sample and thus over-extrapolate.", "We outline directions for future, more robust, investigations, and further caution that future work of this sort should focus on datasets with far more languages, something our new methods now allow.", "The Surprisal of a Sentence When trying to estimate the difficulty (or complexity) of a language, we face a problem: the predictiveness of a language model on a domain of text will reflect not only the language that the text is written in, but also the topic, meaning, style, and information density of the text.", "To measure the effect due only to the language, we would like to compare on datasets that are matched for the other variables, to the extent possible.", "The datasets should all contain the same content, the only difference being the language in which it is expressed.", "Multitext for a Fair Comparison To attempt a fair comparison, we make use of multitext-sentence-aligned 2 translations of the same content in multiple languages.", "Different surprisals on the translations of the same sentence reflect quality differences in the language models, unless the translators added or removed information.", "3 In what follows, we will distinguish between the i th sentence in language j, which is 
a specific string s i j , and the i th intent, the shared abstract thought that gave rise to all the sentences s i1 , s i2 , .", ".", ".. For simplicity, suppose for now that we have a fully parallel corpus.", "We select, say, 80% of the intents.", "4 We use the English sentences that express these intents to train an English language model, and test it on the sentences that express the remaining 20% of the intents.", "We will later drop the assumption of a fully parallel corpus ( §3), which will help us to estimate the effects of translationese ( §6).", "Comparing Surprisal Across Languages Given some test sentence s i j , a language model p defines its surprisal: the negative log-likelihood NLL(s i j ) = − log 2 p(s i j ).", "This can be interpreted as the number of bits needed to represent the sentence under a compression scheme that is derived from the language model, with high-probability sentences requiring the fewest bits.", "Long or unusual sentences tend to have high surprisal-but high surprisal can also reflect a language's model's failure to anticipate predictable words.", "In fact, language models for the same language are often comparatively evaluated by their average surprisal on a corpus (the cross-entropy).", "Cotterell et al.", "(2018) similarly compared language models for different languages, using a multitext corpus.", "Concretely, recall that s i j and s i j should contain, at least in principle, the same information for two languages j and j -they are translations of each other.", "But, if we find that NLL(s i j ) > NLL(s i j ), we must assume that either s i j contains more information than s i j , or that our language model was simply able to predict it less well.", "5 If we were to assume that our language models were perfect in the sense that they captured the true probability distribution of a language, we could make the former claim; but we suspect that much of the difference can be explained by our imperfect LMs rather than inherent differences in the expressed information (see the discussion in footnote 3).", "Our Language Models Specifically, the crude tools we use are recurrent neural network language models (RNNLMs) over different types of subword units.", "For fairness, it is of utmost importance that these language models are open-vocabulary, i.e., they predict the entire string and cannot cheat by predicting only UNK (\"unknown\") for some words of the language.", "6 Char-RNNLM The first open-vocabulary RNNLM is the one of Sutskever et al.", "(2011) , whose model generates a sentence, not word by 5 The former might be the result of overt marking of, say, evidentiality or gender, which adds information.", "We hope that these differences are taken care of by diligent translators producing faithful translations in our multitext corpus.", "6 We restrict the set of characters to those that we see at least 25 times in the training set, replacing all others with a new symbol , as is common and easily defensible in openvocabulary language modeling .", "We make an exception for Chinese, where we only require each character to appear at least twice.", "These thresholds result in negligible \"out-of-alphabet\" rates for all languages.", "word, but rather character by character.", "An obvious drawback of the model is that it has no explicit representation of reusable substrings , but the fact that it does not rely on a somewhat arbitrary word segmentation or tokenization makes it attractive for this study.", "We use a more current version based on LSTMs (Hochreiter 
and Schmidhuber, 1997) , using the implementation of Merity et al.", "(2018) with the char-PTB parameters.", "BPE-RNNLM BPE-based open-vocabulary language models make use of sub-word units instead of either words or characters and are a strong baseline on multiple languages .", "Before training the RNN, byte pair encoding (BPE; Sennrich et al., 2016) is applied globally to the training corpus, splitting each word (i.e., each space-separated substring) into one or more units.", "The RNN is then trained over the sequence of units, which looks like this: \"The |ex|os|kel|eton |is |gener|ally |blue\".", "The set of subword units is finite and determined from training data only, but it is a superset of the alphabet, making it possible to explain any novel word in held-out data via some segmentation.", "7 One important thing to note is that the size of this set can be tuned by specifying the number of BPE merges, allowing us to smoothly vary between a word-level model (∞ merges) and a kind of character-level model (0 merges).", "As Figure 2 shows, the number of merges that maximizes log-likelihood of our dev set differs from language to language.", "8 However, as we will see in Figure 3 , tuning this parameter does not substantially influence our results.", "We therefore will refer to the model with 0.4|V | merges as BPE-RNNLM.", "3 Aggregating Sentence Surprisals Cotterell et al.", "(2018) evaluated the model for language j simply by its total surprisal i NLL(s i j ).", "This comparative measure required a complete multitext corpus containing every sentence s i j (the expression of the intent i in language j).", "We relax this requirement by using a fully probabilistic regression model that can deal with missing data ( Figure 1 ).", "9 Our model predicts each sentence's surprisal y i j = NLL(s i j ) using an intent-specific \"information content\" factor n i , which captures the inherent surprisal of the intent, combined with a language-specific difficulty factor d j .", "This represents a better approach to varying sentence lengths and lets us work with missing translations in the test data (though it does not remedy our need for fully parallel language model training data).", "Model 1: Multiplicative Mixed-effects Model 1 is a multiplicative mixed-effects model: y i j = n i · exp(d j ) · exp( i j ) (1) i j ∼ N (0, σ 2 ) (2) This says that each intent i has a latent size of n imeasured in some abstract \"informational units\"that is observed indirectly in the various sentences s i j that express the intent.", "Larger n i tend to yield longer sentences.", "Sentence s i j has y i j bits of surprisal; thus the multiplier y i j/n i represents the number of bits that language j used to express each informational unit of intent i, under our language model of language j.", "Our mixedeffects model assumes that this multiplier is lognormally distributed over the sentences i: that is, log( y i j/n i ) ∼ N (d j , σ 2 ), where mean d j is the dif- ficulty of language j.", "That is, y i j/n i = exp(d j + i j ) where i j ∼ N (0, σ 2 ) is residual noise, yielding equations (1)-(2).", "10 We jointly fit the intent sizes n i and the language difficulties d j .", "9 Specifically, we deal with data missing completely at random (MCAR), a strong assumption on the data generation process.", "More discussion on this can be found in Appendix A.", "10 It is tempting to give each language its own σ 2 j parameter, but then the MAP estimate is pathological, since infinite likelihood can be attained by setting one 
language's σ 2 j to 0.", "Model 2: Heteroscedasticity Because it is multiplicative, Model 1 appropriately predicts that in each language j, intents with large n i will not only have larger y i j values but these values will vary more widely.", "However, Model 1 is homoscedastic: the variance σ 2 of log( y i j/n i ) is assumed to be independent of the independent variable n i , which predicts that the distribution of y i j should spread out linearly as the information content n i increases: e.g., p(y i j ≥ 13 | n i = 10) = p(y i j ≥ 26 | n i = 20).", "That assumption is questionable, since for a longer sentence, we would expect log y i j/n i to come closer to its mean d j as the random effects of individual translational choices average out.", "11 We address this issue by assuming that y i j results from n i ∈ N independent choices: y i j = exp(d j ) · n i k=1 exp i jk (3) i jk ∼ N (0, σ 2 ) (4) The number of bits for the k th informational unit now varies by a factor of exp i jk that is log-normal and independent of the other units.", "It is common to approximate the sum of independent log-normals by another log-normal distribution, matching mean and variance (Fenton-Wilkinson approximation; Fenton, 1960) , 12 yielding Model 2: y i j = n i · exp(d j ) · exp( i j ) (1) σ 2 i = ln 1 + exp(σ 2 )−1 n i (5) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (6) in which the noise term i j now depends on n i .", "Unlike (4), this formula no longer requires n i ∈ N; we allow any n i ∈ R >0 , which will also let us use gradient descent in estimating n i .", "In effect, fitting the model chooses each n i so that the resulting intent-specific but languageindependent distribution of n i · exp( i j ) values, 13 11 Similarly, flipping a fair coin 10 times results in 5 ± 1.58 heads where 1.58 represents the standard deviation, but flipping it 20 times does not result in 10 ± 1.58 · 2 heads but rather 10 ± 1.58 · √ 2 heads.", "Thus, with more flips, the ratio heads/flips tends to fall closer to its mean 0.5.", "12 There are better approximations, but even the only slightly more complicated Schwartz-Yeh approximation (Schwartz and Yeh, 1982) already requires costly and complicated approximations in addition to lacking the generalizability to nonintegral n i values that we will obtain for the Fenton-Wilkinson approximation.", "13 The distribution of i j is the same for every j.", "It no longer has mean 0, but it depends only on n i .", "after it is scaled by exp(d j ) for each language j, will assign high probability to the observed y i j .", "Notice that in Model 2, the scale of n i becomes meaningful: fitting the model will choose the size of the abstract informational units so as to predict how rapidly σ i falls off with n i .", "This contrasts with Model 1, where doubling all the n i values could be compensated for by halving all the exp(d j ) values.", "Model 2L: An Outlier-Resistant Variant One way to make Model 2 more outlier-resistant is to use a Laplace distribution 14 instead of a Gaussian in (6) as an approximation to the distribution of i j .", "The Laplace distribution is heavy-tailed, so it is more tolerant of large residuals.", "We choose its mean and variance just as in (6) .", "This heavy-tailed i j distribution can be viewed as approximating a version of Model 2 in which the i jk themselves follow some heavy-tailed distribution.", "Estimating model parameters We fit each regression model's parameters by L-BFGS.", "We then evaluate the model's fitness by measuring its held-out data likelihood-that is, the 
probability it assigns to the y i j values for held-out intents i.", "Here we use the previously fitted d j and σ parameters, but we must newly fit n i values for the new i using MAP estimates or posterior means.", "A full comparison of our models under various conditions can be found in Appendix C. The primary findings are as follows.", "On Europarl data (which has fewer languages), Model 2 performs best.", "On the Bible corpora, all models are relatively close to one another, though the robust Model 2L gets more consistent results than Model 2 across data subsets.", "We use MAP estimates under Model 2 for all remaining experiments for speed and simplicity.", "15 A Note on Bayesian Inference As our model of y i j values is fully generative, one could place priors on our parameters and do full inference of the posterior rather than performing MAP inference.", "We did experiment with priors but found them so quickly overruled by the data that it did not make much sense to spend time on them.", "Specifically, for full inference, we implemented all models in STAN (Carpenter et al., 2017) , a 14 One could also use a Cauchy distribution instead of the Laplace distribution to get even heavier tails, but we saw little difference between the two in practice.", "15 Further enhancements are possible: we discuss our \"Model 3\" in Appendix B, but it did not seem to fit better.", "toolkit for fast, state-of-the-art inference using Hamiltonian Monte Carlo (HMC) estimation.", "Running HMC unfortunately scales sublinearly with the number of sentences (and thus results in very long sampling times), and the posteriors we obtained were unimodal with relatively small variances (see also Appendix C).", "We therefore work with the MAP estimates in the rest of this paper.", "The Difficulties of 69 languages Having outlined our method for estimating language difficulty scores d j , we now seek data to do so for all our languages.", "If we wanted to cover the most languages possible with parallel text, we should surely look at the Universal Declaration of Human Rights, which has been translated into over 500 languages.", "Yet this short document is far too small to train state-of-the-art language models.", "In this paper, we will therefore follow previous work in using the Europarl corpus (Koehn, 2005) , but also for the first time make use of 106 Bibles from Mayer and Cysouw (2014)'s corpus.", "Although our regression models of the surprisals y i j can be estimated from incomplete multitext, the surprisals themselves are derived from the language models we are comparing.", "To ensure that the language models are comparable, we want to train them on completely parallel data in the various languages.", "For this, we seek complete multitext.", "Europarl: 21 Languages The Europarl corpus (Koehn, 2005) contains decades worth of discussions of the European Parliament, where each intent appears in up to 21 languages.", "It was previously used by Cotterell et al.", "(2018) for its size and stability.", "In §6, we will also exploit the fact that each intent's original language is known.", "To simplify our access to this information, we will use the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) .", "From it, we extract the intents that appear in all 21 languages, as enumerated in footnote 8.", "The full extraction process and corpus statistics are detailed in Appendix D. 
The Bible: 62 Languages The Bible is a religious text that has been used for decades as a dataset for massively multilingual NLP (Resnik et al., 1999; Yarowsky et al., 2001; Agić et al., 2016) tokenized 16 and aligned collection assembled by Mayer and Cysouw (2014) .", "We use the smallest annotated subdivision (a single verse) as a sentence in our difficulty estimation model; see footnote 2.", "Some of the Bibles in the dataset are incomplete.", "As the Bibles include different sets of verses (intents), we have to select a set of Bibles that overlap strongly, so we can use the verses shared by all these Bibles to comparably train all our language models (and fairly test them: see Appendix A).", "We cast this selection problem as an integer linear program (ILP), which we solve exactly in a few hours using the Gurobi solver (more details on this selection in Appendix E).", "This optimal solution keeps 25996 verses, each of which appears across 106 Bibles in 62 languages, 17 spanning 13 language families.", "18 We allow j to range over the 106 Bibles, so when a language has multiple Bibles, we estimate a separate difficulty d j for each one.", "Results The estimated difficulties are visualized in Figure 4 .", "We can see that general trends are preserved between datasets: German and Hungarian are hardest, English and Lithuanian easiest.", "As we can see in Figure 3 for Europarl, the difficulty estimates are 16 The fact that the resource is tokenized is (yet) another possible confound for this study: we are not comparing performance on languages, but on languages/Bibles with some specific translator and tokenization.", "It is possible that our y i j values for each language j depend to a small degree on the tokenizer that was chosen for that language.", "17 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 18 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran.", "For each language, we are reporting here the first family listed by Ethnologue (Paul et al., 2009) , manually fixing tlh → Constructed language.", "It is unfortunate not to have more families or more languages per family.", "A broader sample could be obtained by taking only the New Testament-but unfortunately that has < 8000 verses, a meager third of our dataset that is already smaller that the usually considered tiny PTB dataset (see details in Appendix E).", "hardly affected when tuning the number of BPE merges per-language instead of globally, validating our approach of using the BPE model for our experiments.", "A bigger difference seems to be the choice of char-RNNLM vs. 
BPE-RNNLM, which changes the ranking of languages both on Europarl data and on Bibles.", "We still see German as the hardest language, but almost all other languages switch places.", "Specifically, we can see that the variance of the char-RNNLM is much higher.", "Are All Translations the Same?", "Texts like the Bible are justly infamous for their sometimes archaic or unrepresentative use of language.", "The fact that we sometimes have multiple Bible translations in the same language lets us observe variation by translation style.", "The sample standard deviation of d j among the 106 Bibles j is 0.076/0.063 for BPE/char-RNNLM.", "Within the 11 German, 11 French, and 4 English Bibles, the sample standard deviations were roughly 0.05/0.04, 0.05/0.04, and 0.02/0.04 respectively: so style accounts for less than half the variance.", "We also consider another parallel corpus, created from the NIST OpenMT competitions on machine translation, in which each sentence has 4 English translations (NIST Multimodal Information Group, 2010a ,b,c,d,e,f,g, 2013b .", "We get a sample standard deviation of 0.01/0.03 among the 4 resulting English corpora, suggesting that language difficulty estimates (particularly the BPE estimate) depend less on the translator, to the extent that these corpora represent individual translators.", "What Correlates with Difficulty?", "Making use of our results on these languages, we can now answer the question: what features of a language correlate with the difference in language complexity?", "Sadly, we cannot conduct all analyses on all data: the Europarl languages are well-served by existing tools like UDPipe (Straka et al., 2016) , but the languages of our Bibles are often not.", "We therefore conduct analyses that rely on automatically extracted features only on the Europarl corpora.", "Note that to ensure a false discovery rate of at most α = .05, all reported p-values have to be corrected using Benjamini and Hochberg (1995) 's procedure: only p ≤ .05· 5 /28 ≈ 0.009 is significant.", "Highlighted on the right are deu and fra, for which we have many Bibles, and eng, which has often been prioritized even over these two in research.", "In the middle we see the difficulties of the 14 languages that are shared between the Bibles and Europarl aligned to each other (averaging all estimates), indicating that the general trends we see are not tied to either corpus.", "choose among forms like \"talk,\" \"talks,\" \"talking\") was mainly responsible for difficulty in modeling.", "They found a language's Morphological Counting Complexity (Sagot, 2013) to correlate positively with its difficulty.", "We use the reported MCC values from that paper for our 21 Europarl languages, but to our surprise, find no statistically significant correlation with the newly estimated difficulties of our new language models.", "Comparing the scatterplot for both languages in Figure 5 Figure 1 , we see that the high-MCC outlier Finnish has become much easier in our (presumably) better-tuned models.", "We suspect that the reported correlation in that paper was mainly driven by such outliers and conclude that MCC is not a good predictor of modeling difficulty.", "Perhaps finer measures of morphological complexity would be more predictive.", "(2018) propose an alternative measure of morphosyntactic complexity.", "Given a corpus of dependency graphs, they estimate the conditional entropy of the POS tag of a random token's parent, conditioned on the token's type.", "In a language where this HPE-mean metric is 
low, most tokens can predict the POS of their parent even without context.", "We compute HPE-mean from dependency parses of the Europarl data, generated using UDPipe 1.2.0 (Straka et al., 2016) and freely-available tokenization, tagging, parsing models trained on the Universal Dependencies 2.0 treebanks (Straka and Strakov, 2017) .", "HPE-mean may be regarded as the mean over all corpus tokens of Head POS Entropy (Dehouck and Denis, 2018) , which is the entropy of the POS tag of a token's parent given that particular token's type.", "We also compute HPE-skew, the (positive) skewness of the empirical distribution of HPE on the corpus tokens.", "We remark that in each language, HPE is 0 for most tokens.", "Morphological Counting Complexity Head-POS Entropy Dehouck and Denis As predictors of language difficulty, HPE-mean has a Spearman's ρ = .004/−.045 (p > .9/.8) and HPE-skew has a Spearman's ρ = .032/.158 (p > .8/.4), so this is not a positive result.", "Average dependency length It has been observed that languages tend to minimize the distance between heads and dependents (Liu, 2008) .", "Speakers prefer shorter dependencies in both production and processing, and average dependency lengths tend to be much shorter than would be expected from randomly-generated parses (Futrell et al., 2015; Liu et al., 2017) .", "On the other hand, there is substantial variability between languages, and it has been proposed, for example, that head-final languages and case-marking languages tend to have longer dependencies on average.", "Do language models find short dependencies easier?", "We find that average dependency lengths estimated from automated parses are very closely correlated with those estimated from (held-out) manual parse trees.", "We again use the automaticallyparsed Europarl data and compute dependency lengths using the Futrell et al.", "(2015) procedure, which excludes punctuation and standardizes several other grammatical relationships (e.g., objects of prepositions are made to depend on their prepositions, and verbs to depend on their complementizers).", "Our hypothesis that scrambling makes language harder to model seems confirmed at first: while the non-parametric (and thus more weakly powered) Spearman's ρ = .196/.092 (p = .394/.691), Pearson's r = .486/.522 (p = .032/.015).", "However, after correcting for multiple comparisons, this is also non-significant.", "19 WALS features The World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013) contains nearly 200 binary and integer features for over 2000 languages.", "Similarly to the Bible situation, not all features are present for all languages-and for some of our Bibles, no information can be found at all.", "We therefore restrict our attention to two well-annotated WALS features that are present in enough of our Bible languages (foregoing Europarl to keep the analysis simple): 26A \"Prefixing vs. 
Suffixing in Inflectional Morphology\" and 81A \"Order of Subject, Object and Verb.\"", "The results are again not quite as striking as we would hope.", "In particular, in Mood's median null hypothesis significance test neither 26A (p > .3 / .7 for BPE/char-RNNLM) nor 81A (p > .6 / .2 for BPE/char-RNNLM) show any significant differences between categories (detailed results in Appendix F.1).", "We therefore turn our attention to much simpler, yet strikingly effective heuristics.", "Raw character sequence length An interesting correlation emerges between language difficulty 19 We also caution that the significance test for Pearson's assumes that the two variables are bivariate normal.", "If not, then even a significant r does not allow us to reject the null hypothesis of zero covariance (Kowalski, 1972, Figs.", "1-2, §5) .", "for the char-RNNLM and the raw length in characters of the test corpus (detailed results in Appendix F.2).", "On both Europarl and the more reliable Bible corpus, we have positive correlation for the char-RNNLM at a significance level of p < .001, passing the multiple-test correction.", "The BPE-RNNLM correlation on the Bible corpus is very weak, suggesting that allowing larger units of prediction effectively eliminates this source of difficulty (van Merriënboer et al., 2017) .", "Raw word inventory Our most predictive feature, however, is the size of the word inventory.", "To obtain this number, we count the number of distinct types |V | in the (tokenized) training set of a language (detailed results in Appendix F.3).", "20 While again there is little power in the small set of Europarl languages, on the bigger set of Bibles we do see the biggest positive correlation of any of our features-but only on the BPE model (p < 1e−11).", "Recall that the char-RNNLM has no notion of words, whereas the number of BPE units increases with |V | (indeed, many whole words are BPE units, because we do many merges but BPE stops at word boundaries).", "Thus, one interpretation is that the Bible corpora are too small to fit the parameters for all the units needed in large-vocabulary languages.", "A similarly predictive feature on Bibleswhose numerator is this word inventory size-is the type/token ratio, where values closer to 1 are a traditional omen of undertraining.", "An interesting observation is that on Europarl, the size of the word inventory and the morphological counting complexity of a language correlate quite well with each other (Pearson's ρ = .693 at p = .0005, Spearman's ρ = .666 at p = .0009), so the original claim in Cotterell et al.", "(2018) about MCC may very well hold true after all.", "Unfortunately, we cannot estimate the MCC for all the Bible languages, or this would be easy to check.", "21 Given more nuanced linguistic measures (or more languages), our methods may permit discovery of specific linguistic correlates of modeling difficulty, beyond these simply suggestive results.", "20 A more sophisticated version of this feature might consider not just the existence of certain forms but also their rates of appearance.", "We did calculate the entropy of the unigram distribution over words in a language, but we found that is strongly correlated with the size of the word inventory and not any more predictive.", "21 Perhaps in a future where more data has been annotated by the UniMorph project (Kirov et al., 2018) , a yet more comprehensive study can be performed, and the null hypothesis for the MCC can be ruled out after all.", "Evaluating Translationese Our previous 
experiments treat translated sentences just like natively generated sentences.", "But since Europarl contains information about which language an intent was originally expressed in, 22 here we have the opportunity to ask another question: is translationese harder, easier, indistinguishable, or impossible to tell?", "We tackle this question by splitting each language j into two sub-languages, \"native\" j and \"translated\" j, resulting in 42 sublanguages with 42 difficulties.", "23 Each intent is expressed in at most 21 sub-languages, so this approach requires a regresssion method that can handle missing data, such as the probabilistic approach we proposed in §3.", "Our mixed-effects modeling ensures that our estimation focuses on the differences between languages, controlling for content by automatically fitting the n i factors.", "Thus, we are not in danger of calling native German more complicated than translated German just because German speakers in Parliament may like to talk about complicated things in complicated ways.", "In a first attempt, we simply use our alreadytrained BPE-best models (as they perform the best and are thus most likely to support claims about the language itself rather than the shortcomings of any singular model), limit ourselves to only splitting the eight languages that have at least 500 native sentences 24 (to ensure stable results).", "Indeed we seem to find that native sentences are slightly more difficult: their d j is 0.027 larger (± 0.023, averaged over our selected 8 languages).", "But are they?", "This result is confounded by the fact that our RNN language models were trained mostly on translationese text (even the English data is mostly translationese).", "Thus, translationese might merely be different (Rabinovich and Wintner, 2015) -not necessarily easier to model, but overrepresented when training the model, making the translationese test sentences more predictable.", "To remove this confound, we must train our language 22 It should be said that using Europarl for translationese studies is not without caveats (Rabinovich et al., 2016) , one of them being the fact that not all language pairs are translated equally: a natively Finnish sentence is translated first into English, French, or German (pivoting) and only from there into any other language like Bulgarian.", "23 This method would also allow us to study the effect of source language, yielding d j←j for sentences translated from j into j.", "Similarly, we could have included surprisals from both models, jointly estimating d j,char-RNN and d j,BPE values.", "24 en (3256), fr (1650), de (1275), pt (1077), it (892), es (685), ro (661), pl (594) models on equal parts translationese and native text.", "We cannot do this for multiple languages at once, given our requirement of training all language models on the same intents.", "We thus choose to balance only one language-we train all models for all languages, making sure that the training set for one language is balanced-and then perform our regression, reporting the translationese and native difficulties only for the balanced language.", "We repeat this process for every language that has enough intents.", "We sample equal numbers of native and non-native sentences, such that there are ∼1M words in the corresponding English column (to be comparable to the PTB size).", "To raise the number of languages we can split in this way, we restrict ourselves here to fully-parallel Europarl in only 10 languages 25 instead of 21, thus ensuring that each of 
these 10 languages has enough native sentences.", "On this level playing field, the previously observed effect practically disappears (-0.0044 ± 0.022), leading us to question the widespread hypothesis that translationese is \"easier\" to model (Baker, 1993 ).", "26 Conclusion There is a real danger in cross-linguistic studies of over-extrapolating from limited data.", "We reevaluated the conclusions of Cotterell et al.", "(2018) on a larger set of languages, requiring new methods to select fully parallel data ( §4.2) or handle missing data.", "We showed how to fit a paired-sample multiplicative mixed-effects model to probabilistically obtain language difficulties from at-least-pairwise parallel corpora.", "Our language difficulty estimates were largely stable across datasets and language model architectures, but they were not significantly predicted by linguistic factors.", "However, a language's vocabulary size and the length in characters of its sentences were well-correlated with difficulty on our large set of languages.", "Our mixed-effects approach could be used to assess other NLP systems via parallel texts, separating out the influences on performance of language, sentence, model architecture, and training procedure.", "A A Note on Missing Data We stated that our model can deal with missing data, but this is true only for the case of data missing completely at random (MCAR), the strongest assumption we can make about missing data: the missingness of data is neither influenced by what the value would have been (had it not been missing), nor by any covariates.", "Sadly, this assumption is rarely met in real translations, where difficult, useless, or otherwise distinctive sentences may be skipped.", "This leads to data missing at random (MAR), where the missingness of a translation is correlated with the original sentence it should have been translated from-or even data missing not at random (MNAR), where the missingness of a translation is correlated with that translation, i.e., the original sentence was translated, but the translation was then deleted for a reason that depends on the translation itself).", "For this reason we use fully parallel data where possible; in fact, we only make use of the ability to deal with missing data in §6.", "27 B Regression, Model 3: Handling outliers cleverly Consider the problem of outliers.", "In some cases, sloppy translation will yield a y i j that is unusually high or low given the y i j values of other languages j .", "Such a y i j is not good evidence of the quality of the language model for language j since it has been corrupted by the sloppy translation.", "However, under Model 1 or 2, we could not simply explain this corrupted y i j with the random residual i j since large | i j | is highly unlikely under the Gaussian assumption of those models.", "Rather, y i j would have significant influence on our estimate of the per-language effect d j .", "This is the usual motivation for switching to L1 regression, which replaces the Gaussian prior on the residuals with a Laplace prior.", "28 How can we include this idea into our models?", "First let us identify two failure modes: (a) part of a sentence was omitted (or added) during translation, changing the n i additively; thus we should use a noisy n i + ν i j in place of n i in equations (1) and (5) 27 Note that this application counts as data MAR and not MCAR, thus technically violating our requirements, but only in a minor enough way that we are confident it can still be applied.", "28 An 
alternative would be to use a method like RANSAC to discard y i j values that do not appear to fit.", "(b) the style of the translation was unusual throughout the sentence; thus we should use a noisy n i · exp ν i j instead of n i in equations (1) and (5) In both cases ν i j ∼ Laplace(0, b), i.e., ν i j specifies sparse additive or multiplicative noise in ν i j (on language j only).", "29 Let us write out version (b), which is a modification of Model 2 (equations (1), (5) and (6) ): y i j = (n i · exp ν i j ) · exp(d j ) · exp( i j ) = n i · exp(d j ) · exp( i j + ν i j ) (7) ν i j ∼ Laplace(0, b) (8) σ 2 i = ln 1 + exp(σ 2 )−1 n i ·exp ν i j (9) i j ∼ N σ 2 −σ 2 i 2 , σ 2 i , (10) Comparing equation (7) to equation (1) , we see that we are now modeling the residual error in log y i j as a sum of two noise terms a i j = ν i j + i j and penalizing it by (some multiple of) the weighted sum of |ν i j | and 2 i j , where large errors can be more cheaply explained using the former summand, and small errors using the latter summand.", "30 The weighting of the two terms is a tunable hyperparameter.", "We did implement this model and test it on data, but not only was fitting it much harder and slower, it also did not yield particularly encouraging results, leading us to omit it from the main text.", "C Goodness of fit of our difficulty estimation models Figure 6 shows the log-probability of held-out data under the regression model, by fixing the estimated difficulties d j (and sometimes also the estimated variance σ 2 ) to their values obtained from training data, and then finding either MAP estimates or posterior means (by running HMC using STAN) of 29 However, version (a) is then deficient since it then incorrectly allocates some probability mass to n i + ν i j < 0 and thus y i j < 0 is possible.", "This could be fixed by using a different sparsity-inducing distribution.", "30 The cheapest penalty or explanation of the weighted sum δ|ν i j | + 1 2 2 i j for some weighting or threshold δ (which adjusts the relative variances of the two priors) is ν = 0 if |a| ≤ δ, ν = a − δ if a ≥ δ, and ν = −(a − δ) if a < −δ (found by minimizing δ|ν| + 1 2 (a − ν) 2 , a convex function of ν).", "This implies that we incur a quadratic penalty 1 2 a 2 if |a| ≤ δ, and a linear penalty δ(|a| − 1 2 δ) for the other cases; this penalty function is exactly the Huber loss of a, and essentially imposes an L2 penalty on small residuals and an L1 penalty on large residuals (outliers), so our estimate of d j will be something between a mean and a median.", "the other parameters, in particular n i for the new sentences i.", "The error bars are the standard deviations when running the model over different subsets of data.", "The \"simplex\" versions of regression in Figure 6 force all d j to add up to the number of languages (i.e., encouraging each one to stay close to 1).", "This is necessary for Model 1, which otherwise is unidentifiable (hence the enormous standard deviation).", "For other models, it turns out to only have much of an effect on the posterior means, not on the log-probability of held out data under the MAP estimate.", "For stability, we in all cases take the best result when initializing the new parameters randomly or \"sensibly,\" i.e., the n i of an intent i is initialized as the average of the corresponding sentences' y i j .", "D Data selection: Europarl In the \"Corrected & Structured Europarl Corpus\" (CoStEP) corpus (Graën et al., 2014) , sessions are grouped into turns, each turn has one speaker 
(that is marked with clean attributes like native language) and a number of aligned paragraphs for each language, i.e., the actual multitext.", "We ignore all paragraphs that are in ill-fitting turns (i.e., turns with an unequal number of paragraphs across languages, a clear sign of an incorrect alignment), losing roughly 27% of intents.", "After this cleaning step, only 14% of intents are represented in all 21 languages, see the distribution in Figure 7 (the peak at 11 languages is explained by looking at the raw number of sentences present in each language, shown in Figure 8 ).", "Since we want a fair comparison, we use the Finally, it should be said that the text in CoStEP itself contains some markup, marking reports, ellipses, etc., but we strip this additional markup to obtain the raw text.", "We tokenize it using the reversible language-agnostic tokenizer of 31 and split the obtained 78169 paragraphs into training set, development set for tuning our language models, and test set for our regression, again by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over sessions of the parliament and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "D.1 How are the source languages distributed?", "An obvious question we should ask is: how many \"native\" sentences can we actually find in Europarl?", "One could assume that there are as many native sentences as there are intents in total, but there are three issues with this: the first is that the president in any Europarl session is never annotated with name or native language (leaving us guessing what the native version of any president-uttered intent is; 12% of all intents in Europarl that can be extracted have this problem), the second is that a number of speakers are labeled with \"unknown\" as native language (10% of sentences), and finally some speakers have their native language annotated, but it is nowhere to be found in the corresponding sentences (7% of sentences).", "Looking only at the native sentences that we could identify, we can see that there are native sentences in every language, but unsurprisingly, some languages are overrepresented.", "Dividing the number of native sentences in a language by the number of total sentences, we get an idea of how \"natively spoken\" the language is in Europarl, shown in Figure 9 .", "E Data selection: Bibles The Bible is composed of the Old Testament and the New Testament (the latter of which has been much more widely translated), both consisting of individual books, which, in turn, can be separated into chapters, but we will only work with the smallest subdivision unit: the verse, corresponding roughly to a sentence.", "Turning to the collection assembled 31 http://sjmielke.com/papers/tokenize/ by Mayer and Cysouw (2014) , we see that it has over 1000 New Testaments, but far fewer complete Bibles.", "Despite being a fairly standardized book, not all Bibles are fully parallel.", "Some verses and sometimes entire books are missing in some Biblessome of these discrepancies may be reduced to the question of the legitimacy of certain biblical books, others are simply artifacts of verse numbering and labeling of individual translations.", "For us, this means that we can neither simply take all translations that have \"the entire thing\" (in fact, no single Bible in the set covers the union of all others' verses), nor can we take all Bibles and work with 
the verses that they all share (because, again, no single verse is shared over all given Bibles).", "The whole situation is visualized in Figure 10 .", "We have to find a tradeoff: take as many Bibles as possible that share as many verses as possible.", "Specifically, we cast this selection process as an optimization problem: select Bibles such that the number of verses overall (i.e., the number of verses shared times the number of Bibles) is maximal, breaking ties in favor of including more Bibles and ensuring that we have at least 20000 verses overall to ensure applicability of neural language models.", "This problem can be cast as an integer linear program and solved using a standard optimization tool (Gurobi) within a few hours.", "The optimal solution that we find contains 25996 verses for 106 Bibles in 62 languages, 32 spanning 13 language families.", "33 The sizes of the selected Bible subsets are visualized for each Bible in Figure 11 and in relation to other datasets in Table 1 .", "We split them into train/dev/test by dividing the data into blocks of 30 paragraphs and then taking 5 sentences for the development and test set each, leaving the remainder for the training set.", "This way we ensure uniform division over books of the Bible and sizes of 2 /3, 1 /6, and 1 /6, respectively.", "F Detailed regression results F.1 WALS We report the mean and sample standard deviation of language difficulties for languages that lie in the corresponding categories in Table 2 : 32 afr, aln, arb, arz, ayr, bba, ben, bqc, bul, cac, cak, ceb, ces, cmn, cnh, cym, dan, deu, ell, eng, epo, fin, fra, guj, gur, hat, hrv, hun, ind, ita, kek, kjb, lat, lit, mah, mam, mri, mya, nld, nor, plt, poh, por, qub, quh, quy, quz, ron, rus, som, tbz, tcw, tgl, tlh, tpi, tpm, ukr, vie, wal, wbm, xho, zom 33 22 Indo-European, 6 Niger-Congo, 6 Mayan, 6 Austronesian, 4 Sino-Tibetan, 4 Quechuan, 4 Afro-Asiatic, 2 Uralic, 2 Creoles, 2 Constructed languages, 2 Austro-Asiatic, 1 Totonacan, 1 Aymaran; we are reporting the first category on Ethnologue (Paul et al., 2009) F.2 Raw character sequence length We report correlation measures and significance values when regressing on raw character sequence length in F.3 Raw word inventory We report correlation measures and significance values when regressing on the size of the raw word inventory in Table 4 : Correlations and significances when regressing on the size of the raw word inventory." ] }
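The mixed-effects regression of equations (1), (5) and (6) (Model 2) can be fit by MAP with a generic optimizer. The sketch below is an illustrative reimplementation, not the authors' code: it uses scipy's L-BFGS on a dense intent-by-language matrix of surprisals, initializes each n_i to the mean surprisal of its intent as described in Appendix C, and reports difficulties relative to their mean.

```python
import numpy as np
from scipy.optimize import minimize

def model2_nll(params, Y):
    I, J = Y.shape
    d = params[:J]                        # language difficulties d_j
    log_n = params[J:J + I]               # log intent sizes n_i > 0
    sigma2 = np.exp(params[-1])           # global variance sigma^2 > 0
    n = np.exp(log_n)[:, None]
    sigma2_i = np.log1p((np.exp(sigma2) - 1.0) / n)        # eq. (5)
    mean_i = (sigma2 - sigma2_i) / 2.0                      # mean in eq. (6)
    resid = np.log(Y) - log_n[:, None] - d[None, :]         # eps_ij from eq. (1)
    return np.sum(0.5 * np.log(2 * np.pi * sigma2_i)
                  + (resid - mean_i) ** 2 / (2 * sigma2_i))

def fit_model2(Y):
    I, J = Y.shape
    x0 = np.concatenate([np.zeros(J),                 # d_j = 0
                         np.log(Y.mean(axis=1)),      # "sensible" n_i init
                         [np.log(0.1)]])              # sigma^2 init
    res = minimize(model2_nll, x0, args=(Y,), method="L-BFGS-B")
    return res.x[:J], np.exp(res.x[J:J + I]), np.exp(res.x[-1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_d = np.array([0.0, 0.1, -0.05])
    n = rng.uniform(5, 40, size=100)
    Y = n[:, None] * np.exp(true_d)[None, :] * np.exp(rng.normal(0, 0.05, (100, 3)))
    d_hat, _, _ = fit_model2(Y)
    print(np.round(d_hat - d_hat.mean(), 3))   # difficulties relative to the mean
```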
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "3.5", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "The Surprisal of a Sentence", "Multitext for a Fair Comparison", "Comparing Surprisal Across Languages", "Our Language Models", "Model 1: Multiplicative Mixed-effects", "Model 2: Heteroscedasticity", "Model 2L: An Outlier-Resistant Variant", "Estimating model parameters", "A Note on Bayesian Inference", "The Difficulties of 69 languages", "Europarl: 21 Languages", "The Bible: 62 Languages", "Results", "Are All Translations the Same?", "What Correlates with Difficulty?", "Evaluating Translationese", "Conclusion" ] }
GEM-SciDuet-train-58#paper-1115#slide-15
Conclusion: cross-linguistic comparisons are tricky (hope we didn't mess up)
1. Make sure your training data is comparable and fair. 2. Make sure your metrics are comparable and fair. 3. Make sure your stats are fair (no p-hacking!). 4. Work on more NLP resources for more languages!
1. Make sure your training data is comparable and fair. 2. Make sure your metrics are comparable and fair. 3. Make sure your stats are fair (no p-hacking!). 4. Work on more NLP resources for more languages!
[]
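The Europarl appendix above summarizes, per language, how "natively spoken" the language is by dividing the number of native sentences by the total number of sentences (Figure 9). A toy helper for that ratio; the (language, speaker_native_language) pair representation of the cleaned corpus is an assumption made for illustration.

    from collections import Counter

    def nativeness_by_language(sentences):
        """sentences: iterable of (language, speaker_native_language) pairs, where the
        second element may be None or 'unknown' for unannotated speakers.
        Returns the fraction of each language's sentences uttered by native speakers."""
        total, native = Counter(), Counter()
        for lang, speaker_native in sentences:
            total[lang] += 1
            if speaker_native == lang:
                native[lang] += 1
        return {lang: native[lang] / total[lang] for lang in total}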
GEM-SciDuet-train-59#paper-1116#slide-0
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL, ADD-ONE-RTE (Pavlick & Callison-Burch, 2016), and recast datasets such as SPR (Reisinger et al., 2015). [Table 1(b): Method 2 accuracies on the synthetic data.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many 
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-0
NLU as Relationship Identification
Natural language inference (entailment) Premise: A woman is running in the park with her dog Hypothesis: A woman is sleeping Relation: entailment, neutral, contradiction No, he replied, except that he seems in a great hurry. That's just it, Jimmy returned promptly. Did you ever see him hurry unless he was frightened? Peter confessed that he never had. Q: Well, he isn't now, yet just look at him go A: Do, case, confessed, frightened, mean, replied, returned, said, see, thought Q: Is the girl walking the bike? Reading comprehension Visual question answering Assumption: Identifying the relationship requires language understanding
Natural language inference (entailment) Premise: A woman is running in the park with her dog Hypothesis: A woman is sleeping Relation: entailment, neutral, contradiction No, he replied, except that he seems in a great hurry. That's just it, Jimmy returned promptly. Did you ever see him hurry unless he was frightened? Peter confessed that he never had. Q: Well, he isn't now, yet just look at him go A: Do, case, confessed, frightened, mean, replied, returned, said, see, thought Q: Is the girl walking the bike? Reading comprehension Visual question answering Assumption: Identifying the relationship requires language understanding
[]
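Appendix A.1, quoted in the paper content above, gives concrete label-mapping rules for evaluating a 3-way SNLI-trained model on datasets with other label spaces. A small sketch of those rules; the dataset-name strings and function names are illustrative, not taken from the released code.

    def map_prediction(pred, target_dataset):
        """Map a 3-way prediction (ENTAILMENT / NEUTRAL / CONTRADICTION) onto a
        target dataset's label space, following the rules in Appendix A.1."""
        if target_dataset in {"MNLI", "SICK", "MPE", "JOCI"}:
            return pred                                  # same 3-way label space
        if target_dataset == "SCITAIL":                  # 2-way: ENTAILMENT / NEUTRAL
            return "NEUTRAL" if pred == "CONTRADICTION" else pred
        # Binary ENTAILED / NOT-ENTAILED datasets (ADD-ONE-RTE and the recast datasets).
        return "ENTAILED" if pred == "ENTAILMENT" else "NOT-ENTAILED"

    def map_joci_gold(score):
        """Convert a JOCI 5-way ordinal gold score into a 3-way NLI tag."""
        if score == 1:
            return "CONTRADICTION"
        if score == 5:
            return "ENTAILMENT"
        return "NEUTRAL"   # scores 2-4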
GEM-SciDuet-train-59#paper-1116#slide-1
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL, ADD-ONE-RTE (Pavlick & Callison-Burch, 2016), and recast datasets such as SPR (Reisinger et al., 2015). [Table 1(b): Method 2 accuracies on the synthetic data.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many 
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI (Table 2).", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (where the hypothesis-only model trained on the dataset performs well above the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be bias-free.", "We do not see improvements on SNLI-hard, indicating it may still have biases, a possibility acknowledged by Gururangan et al.", "(2018).", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2).", "To test this, we increase the hyper-parameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018), we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on the SNLI test sets (SNLI and SNLI-hard in Table 3) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2).", "We see the largest improvement on SICK, achieving over a 10% increase compared to the 1.8% gain in Table 2.", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, as adding too many random premises will degrade the model.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "Accordingly, we did not update the models' parameters on other datasets.", "But what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches (the baseline, Method 1, and Method 2) are used to pre-train a model on SNLI and fine-tune on the target dataset.", "The fourth
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2).", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when fine-tuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our methods is better in most data regimes, especially with very small amounts of target training data.", "16 We hold out 10K examples from the training set for dev, as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "17 Related Work Biases and artifacts in NLU datasets Many natural language understanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006).", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b).", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016), a story cloze dataset, Schwartz et al.", "(2017b) obtained high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017).", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018).", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases (Kafle & Kanan, 2016, 2017; Goyal et al., 2017; Agrawal et al., 2017).", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomenon as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009, 2010).", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019).", "A common approach to improving robustness is to include adversarial examples
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
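The baseline summarized in the record above uses separate BiLSTM encoders for P and H, combines the two sentence vectors by concatenation, subtraction, and element-wise multiplication, and feeds the result to a one-hidden-layer MLP, with 2048-dimensional sentence vectors and a 512-dimensional hidden layer per Appendix A.2. The sketch below is a minimal PyTorch rendering under stated assumptions, not InferSent itself: the word-embedding lookup is omitted and max-pooling over BiLSTM states is an assumption on my part.

```python
import torch
import torch.nn as nn

class PairBaseline(nn.Module):
    """Rough sketch of the InferSent-style baseline: separate BiLSTM encoders,
    Mou et al. (2016)-style combination (concat, subtract, multiply), MLP classifier.
    Pooling and other unstated details are assumptions."""

    def __init__(self, emb_dim=300, sent_dim=2048, hidden_dim=512, n_classes=3):
        super().__init__()
        self.premise_enc = nn.LSTM(emb_dim, sent_dim // 2, batch_first=True, bidirectional=True)
        self.hypothesis_enc = nn.LSTM(emb_dim, sent_dim // 2, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(4 * sent_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, n_classes),
        )

    @staticmethod
    def _encode(encoder, embedded_tokens):        # (batch, seq_len, emb_dim)
        states, _ = encoder(embedded_tokens)      # (batch, seq_len, sent_dim)
        return states.max(dim=1).values           # pool over time -> (batch, sent_dim)

    def forward(self, premise_emb, hypothesis_emb):
        u = self._encode(self.premise_enc, premise_emb)
        v = self._encode(self.hypothesis_enc, hypothesis_emb)
        feats = torch.cat([u, v, u - v, u * v], dim=-1)   # concat, subtract, multiply
        return self.classifier(feats)                      # logits over the NLI labels
```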
GEM-SciDuet-train-59#paper-1116#slide-1
One Sided Biases
Hypothesis-only NLI (Poliak+ 18; Gururangan+ 18; Tsuchia 18) Hypothesis: A woman is sleeping Reading comprehension (Kaushik & Lipton 18) Visual question answering (Zhang+ 16; Kafle & Kanan 16; Goyal+ 17; Agarwal+ 17; inter alia) Story cloze completion (Schwartz+ 17, Cai+ 17)
Hypothesis-only NLI (Poliak+ 18; Gururangan+ 18; Tsuchia 18) Hypothesis: A woman is sleeping Reading comprehension (Kaushik & Lipton 18) Visual question answering (Zhang+ 16; Kafle & Kanan 16; Goyal+ 17; Agarwal+ 17; inter alia) Story cloze completion (Schwartz+ 17, Cai+ 17)
[]
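The slide record above illustrates one-sided (hypothesis-only) biases; the paper's analysis quantifies them by comparing a hypothesis-only classifier against the majority-class baseline (Maj) on each dataset. The sketch below is a cheap stand-in for that diagnostic: the paper's probe reuses the neural hypothesis encoder, whereas this version uses a bag-of-words classifier purely for illustration, so the absolute numbers will differ.

```python
import numpy as np
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def majority_accuracy(train_labels, test_labels):
    """Accuracy of always predicting the most frequent training label (Maj)."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return float(np.mean([y == majority for y in test_labels]))

def hypothesis_only_accuracy(train_hyps, train_labels, test_hyps, test_labels):
    """Stand-in hypothesis-only probe: a bag-of-words classifier that never sees the
    premise. Beating Maj without access to premises signals hypothesis-only bias."""
    vectorizer = CountVectorizer()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(train_hyps), train_labels)
    preds = clf.predict(vectorizer.transform(test_hyps))
    return float(np.mean(preds == np.array(test_labels)))
```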
GEM-SciDuet-train-59#paper-1116#slide-2
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
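The abstract above describes Method 1 at a high level: a hypothesis-only classifier is attached to the shared hypothesis encoder and its gradient is reversed, so the encoder is discouraged from encoding label-predictive artifacts, with hyper-parameters α and β weighting the two losses. The fragment below is one hedged way to realize this in PyTorch; the encoder and classifier method names are placeholders, and the placement of α and β reflects my reading of the paper's L1/L2 losses rather than the authors' released code.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

def method1_loss(model, hyp_only_head, premise, hypothesis, label, alpha, beta):
    """Placeholder APIs (encode_premise, encode_hypothesis, classify_pair, hyp_only_head)
    stand in for the paper's f_P, f_H, g_theta and g_phi."""
    ce = nn.CrossEntropyLoss()
    u = model.encode_premise(premise)
    v = model.encode_hypothesis(hypothesis)
    main_loss = ce(model.classify_pair(u, v), label)              # -log p(y | P, H)
    # Reversal scale alpha/beta so the encoder receives -alpha times the adversary
    # gradient, while the hypothesis-only head itself is trained with weight beta.
    adv_logits = hyp_only_head(grad_reverse(v, lambd=alpha / max(beta, 1e-8)))
    adv_loss = beta * ce(adv_logits, label)                       # -beta * log p(y | H)
    return main_loss + adv_loss
```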
GEM-SciDuet-train-59#paper-1116#slide-2
Problem
One-sided biases mean that models may not learn the true relationship between premise and hypothesis
One-sided biases mean that models may not learn the true relationship between premise and hypothesis
[]
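The "Problem" slide above corresponds to the paper's synthetic experiment: train on a biased dataset B, in which an artifact character c is appended to every entailed hypothesis, and test on the unbiased dataset A, where P entails H iff the first letters match. A toy generator in the spirit of that setup is sketched below; the sample size and sampling choices are mine, not the paper's.

```python
import random

def make_synthetic(biased: bool, n: int = 1000, seed: int = 0):
    """P entails H iff their first letters match; in the biased variant the artifact 'c'
    is appended to every entailed hypothesis, so a hypothesis-only model can cheat."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        p, h = rng.choice("ab"), rng.choice("ab")
        label = (p == h)                 # TRUE iff the first letters are the same
        if biased and label:
            h = h + "c"                  # hypothesis-only artifact
        examples.append((p, h, label))
    return examples

train_b = make_synthetic(biased=True)    # dataset B: artifact present
test_a = make_synthetic(biased=False)    # dataset A: a c-detector is at chance here
```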
GEM-SciDuet-train-59#paper-1116#slide-3
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
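Method 2, mentioned in the abstract above and analyzed earlier in this document, relies on negative sampling: for a fraction α of training pairs the premise is replaced with one sampled from another example while the hypothesis and label are kept, and those corrupted pairs feed the adversarial (gradient-reversed) head. The helper below sketches only that batch-construction step, assuming a (premise, hypothesis, label) tuple layout; it is an illustration, not the authors' implementation.

```python
import random

def corrupt_premises(batch, alpha, rng=None):
    """Split a batch into examples for the main classifier and premise-swapped examples
    for the adversarial head. `batch` is assumed to be a list of (premise, hypothesis,
    label) tuples; `alpha` is the fraction of pairs that receive a random premise."""
    rng = rng or random.Random(0)
    premises = [p for p, _, _ in batch]
    main, adversarial = [], []
    for p, h, y in batch:
        if rng.random() < alpha:
            adversarial.append((rng.choice(premises), h, y))  # random premise, same label
        else:
            main.append((p, h, y))
    return main, adversarial
```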
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) , and recast data such as SPR (Reisinger et al., 2015) .", "[Table 1: accuracies on the synthetic setup for (a) Method 1 and (b) Method 2 under increasing α and β.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-3
Strategies for dealing with dataset bias
o o Other bias Construct new datasets (Sharma+ 18) Filter easy examples (Gururangan+ 18) o Hard to scale o May still have biases (see SWAG BERT HellaSWAG) Forgo datasets with known biases o Not all bias is bad o Biased datasets may have other useful information
o o Other bias Construct new datasets (Sharma+ 18) Filter easy examples (Gururangan+ 18) o Hard to scale o May still have biases (see SWAG BERT HellaSWAG) Forgo datasets with known biases o Not all bias is bad o Biased datasets may have other useful information
[]
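Method 2 (negative sampling) in the record above replaces the premise with a uniformly sampled one for a fraction alpha of examples while keeping the label, shares the classifier weights, blocks the gradient into the premise encoder for the sampled premises, and reverses (scaled by beta) the gradient reaching the hypothesis encoder. A batch-level approximation might look like the sketch below; it reuses GradReverse and the pair-combination layout from the Method 1 sketch, and computing both terms for every example rather than corrupting an alpha-fraction of the batch is a simplification, not the paper's exact procedure.

import torch
import torch.nn.functional as F

def method2_loss(model, premise, hypothesis, label, alpha=0.1, beta=0.1):
    """Negative-sampling loss; `model` follows the Method1NLI layout and GradReverse sketched earlier."""
    p = model.encode_p(premise)
    h = model.encode_h(hypothesis)

    # Discriminative term on the true (P, H) pairs, down-weighted by (1 - alpha).
    pair = torch.cat([p, h, p - h, p * h], dim=-1)
    loss_true = F.cross_entropy(model.cls_pair(pair), label)

    # Random premises P': permute the batch, keep the original labels.
    perm = torch.randperm(p.size(0))
    p_rand = p[perm].detach()            # block gradients into the premise encoder
    h_rev = GradReverse.apply(h, beta)   # reversed gradient into the hypothesis encoder
    pair_rand = torch.cat([p_rand, h_rev, p_rand - h_rev, p_rand * h_rev], dim=-1)
    loss_rand = F.cross_entropy(model.cls_pair(pair_rand), label)  # classifier weights are shared

    return (1 - alpha) * loss_true + alpha * loss_rand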
GEM-SciDuet-train-59#paper-1116#slide-4
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) , and recast data such as SPR (Reisinger et al., 2015) .", "[Table 1: accuracies on the synthetic setup for (a) Method 1 and (b) Method 2 under increasing α and β.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
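The gradient reversal layers referred to in the related-work discussion above (Ganin et al., 2016), which both of the paper's training methods rely on, are compact enough to sketch. The snippet below is a generic, illustrative PyTorch implementation, not the authors' released code; the names GradReverse and grad_reverse and the scaling factor lambd (standing in for the beta-style weight on the reversed gradient) are ours.

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the incoming gradient by -lambd on the way back.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into whatever produced x;
        # the second return value is the (non-existent) gradient for lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Minimal check: the tensor behind the reversal receives the negated, scaled gradient.
w = torch.ones(3, requires_grad=True)
grad_reverse(w, lambd=0.5).sum().backward()
print(w.grad)  # tensor([-0.5000, -0.5000, -0.5000])

The design point this captures: a classifier sitting on top of the reversed features is still trained normally, so it keeps modeling the artifact, while the encoder underneath the reversal is pushed in the opposite direction.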
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-4
Our approach
Design models that facilitate learning less biased representations
Design models that facilitate learning less biased representations
[]
GEM-SciDuet-train-59#paper-1116#slide-5
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include humanjudged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) 50 50 50 50 50 0.5 50 50 50 50 50 1 50 50 50 50 50 1.5 50 50 50 50 50 2 50 50 50 50 50 2.5 50 50 50 50 50 3 50 50 100 50 50 3.5 50 50 100 50 50 4 50 100 100 50 50 5 50 50 100 100 50 * 10 75 100 100 100 50 * 20 100 100 100 50 * 50 * (b) Method 2 Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesisonly biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyperparameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many 
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
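Circling back to the motivating construction in Section 2 of the paper text above (synthetic datasets A and B over the alphabet {a, b, c}, with the artifact c appended to every entailed hypothesis), a few lines of Python reproduce both the construction and the failure mode of a hypothesis-only rule. The single-letter premises follow the worked example in the text; actual instances could be longer strings, and the function and variable names here are illustrative only.

import random

ALPHABET = ["a", "b"]   # content symbols
ARTIFACT = "c"          # hypothesis-only artifact present only in dataset B

def make_pair(biased):
    p = random.choice(ALPHABET)
    h = random.choice(ALPHABET)
    label = (p[0] == h[0])       # entailed iff the first letters match
    if biased and label:
        h = h + ARTIFACT         # dataset B: append the artifact to every entailed hypothesis
    return p, h, label

random.seed(0)
dataset_A = [make_pair(biased=False) for _ in range(8)]  # unbiased test distribution
dataset_B = [make_pair(biased=True) for _ in range(8)]   # biased training distribution

# A "hypothesis-only" rule that just checks for the artifact fits dataset B perfectly...
acc_B = sum((ARTIFACT in h) == y for _, h, y in dataset_B) / len(dataset_B)
# ...but carries no signal on dataset A, where the artifact never appears
# (it is right only when the true label happens to be FALSE).
acc_A = sum((ARTIFACT in h) == y for _, h, y in dataset_A) / len(dataset_A)
print(acc_B, acc_A)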
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-5
A Generative Perspective
Typical NLI models maximize the discriminative likelihood Our key idea: If we generate the premise, it cannot be ignored We will maximize the likelihood of generating the premise Hypothesis: A woman is sleeping Premise: A woman is running in the park with her dog Relation: contradiction Unfortunately, text generation is hard! Premise: A woman is running in the park with her dog Premise: A woman sings a song while playing piano Premise: This woman is laughing at her baby Instead, rewrite as follows Assume p(P | H) is constant Need to estimate this
Typical NLI models maximize the discriminative likelihood Our key idea: If we generate the premise, it cannot be ignored We will maximize the likelihood of generating the premise Hypothesis: A woman is sleeping Premise: A woman is running in the park with her dog Relation: contradiction Unfortunately, text generation is hard! Premise: A woman is running in the park with her dog Premise: A woman sings a song while playing piano Premise: This woman is laughing at her baby Instead, rewrite as follows Assume p(P | H) is constant Need to estimate this
[]
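The rewrite that this slide alludes to ("Instead, rewrite as follows ... Need to estimate this") is spelled out in the paper body; in LaTeX it is presumably the following, with p(P | H) treated as a constant and the denominator p(y | H) as the term that still has to be estimated (by the hypothesis-only classifier in Method 1, or via negative sampling in Method 2):

\[
\log p(P \mid y, H) \;=\; \log \frac{p_{\theta}(y \mid P, H)\, p(P \mid H)}{p(y \mid H)}
\]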
GEM-SciDuet-train-59#paper-1116#slide-6
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
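One way to make the "hypothesis-only biases" mentioned in this abstract measurable, following the analysis later in the paper (compare a classifier that sees only hypotheses against the majority-class baseline of each target training set), is sketched below. This is a deliberately simplified stand-in: a bag-of-words logistic regression on toy sentences rather than the paper's neural hypothesis-only model, and the example sentences and labels are invented, so the snippet only shows the bookkeeping.

from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for (hypothesis, label) pairs from a source and a target dataset.
source_hyps = ["a woman is sleeping", "nobody is outside", "a man plays guitar", "the dog is not running"]
source_labels = ["contradiction", "contradiction", "entailment", "contradiction"]
target_hyps = ["the cat sits", "a child is eating", "no one is swimming", "a band performs"]
target_labels = ["entailment", "neutral", "contradiction", "entailment"]

# Majority-class baseline (Maj) of the target dataset.
maj_label, maj_count = Counter(target_labels).most_common(1)[0]
maj_acc = maj_count / len(target_labels)

# Hypothesis-only classifier trained on the source, evaluated on the target.
hyp_only = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
hyp_only.fit(source_hyps, source_labels)
transfer_acc = hyp_only.score(target_hyps, target_labels)

# If transfer_acc falls below maj_acc, the target's hypothesis-side biases differ from the source's.
print(f"majority baseline: {maj_acc:.2f}, hypothesis-only (source->target): {transfer_acc:.2f}")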
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs, such as SCITAIL and ADD-ONE-RTE (Pavlick & Callison-Burch, 2016), as well as recast datasets such as SPR (Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-6
Method 1 Auxiliary Hypothesis Classifier
Learn a new estimator p φ,θ (y | H) Learn an additional classification layer
Learn a new estimator p φ,θ (y | H) Learn an additional classification layer
[]
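The slide row above (Method 1, the auxiliary hypothesis-only classifier) describes training an extra classifier g_phi on the hypothesis representation while a gradient reversal layer pushes the hypothesis encoder f_H to unlearn the cues that classifier exploits. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the Method1NLI and grad_reverse names, the linear encoders standing in for the paper's BiLSTM encoders, the 300-dimensional input vectors, and the default alpha and beta values are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class Method1NLI(nn.Module):
    """Toy stand-in for the paper's setup: f_P and f_H encoders, a pair classifier
    g_theta, and an auxiliary hypothesis-only head g_phi behind gradient reversal."""

    def __init__(self, dim=300, hidden=512, n_labels=3, alpha=0.1, beta=0.1):
        super().__init__()
        self.prem_enc = nn.Linear(dim, hidden)             # placeholder for the BiLSTM f_P
        self.hyp_enc = nn.Linear(dim, hidden)              # placeholder for the BiLSTM f_H
        self.classifier = nn.Linear(4 * hidden, n_labels)  # g_theta over [p; h; |p-h|; p*h]
        self.hyp_only = nn.Linear(hidden, n_labels)        # g_phi, the hypothesis-only head
        self.alpha, self.beta = alpha, beta

    def loss(self, prem_vec, hyp_vec, label):
        p = torch.relu(self.prem_enc(prem_vec))
        h = torch.relu(self.hyp_enc(hyp_vec))
        pair = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
        main_loss = F.cross_entropy(self.classifier(pair), label)

        # g_phi is fit to predict y from H alone (its loss is weighted by beta below),
        # while the reversed, rescaled gradient reaching f_H effectively subtracts
        # alpha times the hypothesis-only log-likelihood from the encoder's objective.
        h_rev = grad_reverse(h, lambd=self.alpha / self.beta)
        hyp_only_loss = F.cross_entropy(self.hyp_only(h_rev), label)
        return main_loss + self.beta * hyp_only_loss


# Toy usage with random vectors standing in for real sentence encodings.
model = Method1NLI()
prem, hyp = torch.randn(8, 300), torch.randn(8, 300)
y = torch.randint(0, 3, (8,))
model.loss(prem, hyp, y).backward()

Weighting the auxiliary loss by beta while scaling the reversed gradient by alpha/beta approximates the two losses described in the text: the hypothesis-only head is fit with weight beta, and the hypothesis encoder is updated as if alpha times the hypothesis-only log-likelihood were subtracted from its objective.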
GEM-SciDuet-train-59#paper-1116#slide-7
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs, such as SCITAIL and ADD-ONE-RTE (Pavlick & Callison-Burch, 2016), as well as recast datasets such as SPR (Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention (Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-7
Method 2 Negative Sampling
Lower bound from Jensen's inequality Approximate the expectation with uniform samples P′
Lower bound from Jensen's inequality Approximate the expectation with uniform samples P′
[]
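The slide row above (Method 2, negative sampling) lower-bounds the normalization term with Jensen's inequality and approximates the expectation with uniformly sampled premises P' taken from other training examples. The sketch below is a hypothetical continuation of the Method 1 sketch, assuming its grad_reverse helper and toy Method1NLI model are in scope; it is not the paper's implementation, and for simplicity it weights the true-premise and random-premise terms for every example instead of literally replacing the premise in a fraction alpha of examples.

import torch
import torch.nn.functional as F


def method2_loss(model, prem_vec, hyp_vec, label, alpha=0.1, beta=0.1):
    """Negative-sampling objective: premises are shuffled within the batch to form
    random (P', H) pairs whose original labels are kept."""
    def pair(p, h):
        return torch.cat([p, h, (p - h).abs(), p * h], dim=-1)

    p = torch.relu(model.prem_enc(prem_vec))
    h = torch.relu(model.hyp_enc(hyp_vec))

    # Standard discriminative term on the true (P, H) pairs, weighted by (1 - alpha).
    true_loss = F.cross_entropy(model.classifier(pair(p, h)), label)

    # Negative term: sample P' by permuting the batch, block the gradient to the
    # premise encoder, and reverse the gradient flowing into the hypothesis encoder
    # so that it is pushed to unlearn hypothesis-only cues.
    perm = torch.randperm(prem_vec.size(0))
    p_rand = torch.relu(model.prem_enc(prem_vec[perm])).detach()
    h_rev = grad_reverse(h, lambd=alpha / beta)
    rand_loss = F.cross_entropy(model.classifier(pair(p_rand, h_rev)), label)

    return (1 - alpha) * true_loss + beta * rand_loss


# Toy usage, reusing the Method1NLI stand-in from the previous sketch.
loss = method2_loss(Method1NLI(), torch.randn(8, 300), torch.randn(8, 300),
                    torch.randint(0, 3, (8,)))
loss.backward()

As in the description above, the classifier weights are shared between the true-pair and sampled-pair terms, the gradient to the premise encoder is blocked for the sampled premises, and the gradient into the hypothesis encoder is reversed.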
GEM-SciDuet-train-59#paper-1116#slide-8
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max_θ L_1(θ) = log p_θ(y | P, H) − α log p_{φ,θ}(y | H), and max_φ L_2(φ) = β log p_{φ,θ}(y | H). We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder f H,θ (potentially scaling them by β).", "Method 2: Negative Sampling. As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as −log p(y | H) = −log Σ_{P′} p(P′ | H) p(y | P′, H) = −log E_{P′}[p(y | P′, H)] ≥ −E_{P′}[log p(y | P′, H)], where the expectation is uniform and the last step is from Jensen's inequality.", "As in Method 1, we define a separate p φ,θ (y | P′, H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P′ (from other training examples) to replace the premise in a (P′, H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P′, H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P′'s (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β: max_θ L_1(θ) = (1 − α) log p_θ(y | P, H) − α log p_{θ,φ}(y | P′, H), and max_φ L_2(φ) = β log p_{θ,φ}(y | P′, H). Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P′, H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noise-contrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi & Andersen, 2016). We train our models on SNLI, since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesis-only study, plus two test sets: GLUE's
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) , [...] (Reisinger et al., 2015) . [Table 1: synthetic-experiment accuracies for (a) Method 1 and (b) Method 2 across hyper-parameter settings.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyperparameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-8
What is this good for?
Are less biased models more transferable?
Are less biased models more transferable?
[]
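The record above describes Method 1 (a hypothesis-only classifier trained through a gradient-reversal layer). The following is a hypothetical minimal sketch of that setup, not the authors' InferSent-based implementation: the mean-of-embeddings encoders, the class name Method1NLI, and the single scale alpha (standing in for the paper's α/β pair) are assumptions made for illustration only.

```python
# Minimal sketch of Method 1: a premise+hypothesis classifier plus a
# hypothesis-only head whose gradient is reversed (and scaled) before it
# reaches the shared hypothesis encoder.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambd on the way back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class Method1NLI(nn.Module):
    def __init__(self, vocab_size, dim=300, hidden=512, n_labels=3, alpha=0.4):
        super().__init__()
        self.alpha = alpha
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        # pair classifier g_theta over [p; h; |p-h|; p*h] (Mou et al., 2016 combination)
        self.pair_cls = nn.Sequential(
            nn.Linear(4 * dim, hidden), nn.ReLU(), nn.Linear(hidden, n_labels)
        )
        # hypothesis-only classifier g_phi
        self.hyp_cls = nn.Linear(dim, n_labels)

    def forward(self, premise_ids, hypothesis_ids):
        p = self.emb(premise_ids).mean(dim=1)      # toy stand-in for f_{P,theta}
        h = self.emb(hypothesis_ids).mean(dim=1)   # toy stand-in for f_{H,theta}
        pair_logits = self.pair_cls(torch.cat([p, h, (p - h).abs(), p * h], dim=-1))
        # g_phi sees h through a gradient-reversal layer: the head itself learns to
        # predict y from the hypothesis, while the reversed, alpha-scaled gradient
        # pushes the shared hypothesis representation away from that signal.
        hyp_logits = self.hyp_cls(grad_reverse(h, self.alpha))
        return pair_logits, hyp_logits


# Example training step on toy tensors:
model = Method1NLI(vocab_size=1000)
ce = nn.CrossEntropyLoss()
premise = torch.randint(1, 1000, (8, 20))
hypothesis = torch.randint(1, 1000, (8, 12))
labels = torch.randint(0, 3, (8,))
pair_logits, hyp_logits = model(premise, hypothesis)
loss = ce(pair_logits, labels) + ce(hyp_logits, labels)
loss.backward()
```

Because of the reversal, minimizing the second cross-entropy term trains g_phi while effectively subtracting hypothesis-only signal from the shared encoder, mirroring the −α log p_{φ,θ}(y | H) term in the paper's objective.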
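Method 2 additionally replaces a fraction α of premises with premises sampled uniformly from other training examples while keeping the label. The helper below is a hedged illustration of that sampling step; the function name, batch layout, and returned flag are assumptions, not taken from the paper's code.

```python
# Hypothetical helper for Method 2 (negative sampling): swap in a random
# premise P' for ~alpha of the examples, keeping the hypothesis and label.
import random


def negative_sample_premises(batch, alpha=0.4, rng=random):
    """batch: list of (premise, hypothesis, label) tuples.

    Returns a new list where roughly an alpha fraction of premises are drawn
    uniformly from *other* examples; the boolean flag lets the training loop
    route resampled pairs to the reversed-gradient bias head."""
    out = []
    for i, (premise, hypothesis, label) in enumerate(batch):
        if len(batch) > 1 and rng.random() < alpha:
            j = rng.randrange(len(batch) - 1)
            j = j if j < i else j + 1      # uniform over the other examples
            out.append((batch[j][0], hypothesis, label, True))   # random premise P'
        else:
            out.append((premise, hypothesis, label, False))
    return out

# Usage sketch: resampled pairs contribute the -alpha * log p(y | P', H) term
# (through gradient reversal), the rest contribute (1 - alpha) * log p(y | P, H).
```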
GEM-SciDuet-train-59#paper-1116#slide-11
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max_θ L_1(θ) = log p_θ(y | P, H) − α log p_{φ,θ}(y | H), and max_φ L_2(φ) = β log p_{φ,θ}(y | H). We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder f H,θ (potentially scaling them by β).", "Method 2: Negative Sampling. As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as −log p(y | H) = −log Σ_{P′} p(P′ | H) p(y | P′, H) = −log E_{P′}[p(y | P′, H)] ≥ −E_{P′}[log p(y | P′, H)], where the expectation is uniform and the last step is from Jensen's inequality.", "As in Method 1, we define a separate p φ,θ (y | P′, H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P′ (from other training examples) to replace the premise in a (P′, H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P′, H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P′'s (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β: max_θ L_1(θ) = (1 − α) log p_θ(y | P, H) − α log p_{θ,φ}(y | P′, H), and max_φ L_2(φ) = β log p_{θ,φ}(y | P′, H). Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P′, H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noise-contrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi & Andersen, 2016). We train our models on SNLI, since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesis-only study, plus two test sets: GLUE's
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) , [...] (Reisinger et al., 2015) . [Table 1: synthetic-experiment accuracies for (a) Method 1 and (b) Method 2 across hyper-parameter settings.]", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyperparameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-11
Degradation in domain
SNLI Test SNLI Hard Baseline Auxiliary Hyp. Classifier Negative Sampling
SNLI Test SNLI Hard Baseline Auxiliary Hyp. Classifier Negative Sampling
[]
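Appendix A.1 in the paper_content above states how 3-way SNLI-style predictions are mapped onto target datasets with other label spaces (SCITAIL has no contradiction class, ADD-ONE-RTE and the recast datasets are binary, and JOCI's 5-way gold labels are collapsed to 3-way). The helpers below transcribe those rules; the function names and dataset-name strings are illustrative choices, not identifiers from the released code.

```python
THREE_WAY = {"ENTAILMENT", "NEUTRAL", "CONTRADICTION"}

def map_prediction(pred: str, target_dataset: str) -> str:
    """Map a 3-way model prediction onto a target dataset's label space (Appendix A.1)."""
    assert pred in THREE_WAY
    if target_dataset in {"MNLI", "SICK", "MPE", "JOCI"}:
        return pred  # same 3-way label space
    if target_dataset == "SCITAIL":
        # SCITAIL only has ENTAILMENT / NEUTRAL, so CONTRADICTION is mapped to NEUTRAL.
        return "NEUTRAL" if pred == "CONTRADICTION" else pred
    # ADD-ONE-RTE and the recast datasets are binary.
    return "ENTAILED" if pred == "ENTAILMENT" else "NOT-ENTAILED"

def map_joci_gold(score: int) -> str:
    """Convert JOCI's 5-way ordinal gold label to 3-way NLI tags, following Poliak et al. (2018b)."""
    if score == 1:
        return "CONTRADICTION"
    if 2 <= score <= 4:
        return "NEUTRAL"
    return "ENTAILMENT"  # score == 5
```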
GEM-SciDuet-train-59#paper-1116#slide-12
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases: artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. [Footnotes: * Equal contribution. 1: Our code is available at https://github.com/azpoliak/robust-nli. 2: This hypothesis contradicts the premise and would likely not be inferred.]
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include humanjudged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) 50 50 50 50 50 0.5 50 50 50 50 50 1 50 50 50 50 50 1.5 50 50 50 50 50 2 50 50 50 50 50 2.5 50 50 50 50 50 3 50 50 100 50 50 3.5 50 50 100 50 50 4 50 100 100 50 50 5 50 50 100 100 50 * 10 75 100 100 100 50 * 20 100 100 100 50 * 50 * (b) Method 2 Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesisonly biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyperparameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many 
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-12
Transfer to other datasets
When it works, it works well
When it works, it works well
[]
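Section 3.2 of the paper_content above describes Method 2: a fraction alpha of training pairs have their premise replaced by one sampled from another example (label kept), gradients are blocked from reaching the premise encoder for those sampled premises, and the hypothesis side of that branch is trained through a gradient reversal scaled by beta, with classifier weights shared across both branches. The batch-level sketch below is an interpretive reading of that description; in particular, sampling premises by permuting within the batch, and applying alpha both as the fraction of replaced pairs and as a loss weight, are assumptions rather than details taken from the released code.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Same helper as in the Method 1 sketch: identity forward, -beta * grad backward."""
    @staticmethod
    def forward(ctx, x, beta):
        ctx.beta = beta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.beta, None

def method2_batch_loss(premise_repr, hyp_repr, labels, classifier, alpha=0.1, beta=1.0):
    """Schematic Method 2 objective for one batch of encoder outputs (shape (B, d)):
    most pairs keep their true premise and get the usual NLI loss; the rest get a
    premise sampled from another example, trained through gradient reversal so the
    hypothesis encoder unlearns hypothesis-only biases."""
    batch_size = labels.size(0)
    n_random = int(round(alpha * batch_size))
    loss = premise_repr.new_zeros(())

    if n_random < batch_size:
        # Ordinary NLI loss on pairs that keep their true premise.
        keep_logits = classifier(premise_repr[n_random:], hyp_repr[n_random:])
        loss = loss + (1.0 - alpha) * F.cross_entropy(keep_logits, labels[n_random:])

    if n_random > 0:
        # "Sample" replacement premises by permuting within the batch; detaching them
        # blocks gradients from updating the premise encoder for these examples.
        perm = torch.randperm(batch_size, device=premise_repr.device)[:n_random]
        sampled_premises = premise_repr[perm].detach()
        reversed_hyp = GradReverse.apply(hyp_repr[:n_random], beta)
        rand_logits = classifier(sampled_premises, reversed_hyp)
        loss = loss + alpha * F.cross_entropy(rand_logits, labels[:n_random])
    return loss
```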
GEM-SciDuet-train-59#paper-1116#slide-13
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases: artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. [Footnotes: * Equal contribution. 1: Our code is available at https://github.com/azpoliak/robust-nli. 2: This hypothesis contradicts the premise and would likely not be inferred.]
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) [part of the dataset list was lost in extraction; the Method 2 panel of Table 1, synthetic-setting accuracies for varying α and β, spilled in here and has been removed] SPR (Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-13
Analysis
Q: Does it matter what kind of bias we have? A: Yes! Different biases than training data Usually, more improvement from our methods Q: Do stronger hyper-parameters help? A: More emphasis on the auxiliary objective More transferability, but worse in-domain performance Q: What if we get a bit of out-of-domain training data? A: Pre-training with our methods still helps Especially with datasets with different biases
Q: Does it matter what kind of bias we have? A: Yes! Different biases than training data Usually, more improvement from our methods Q: Do stronger hyper-parameters help? A: More emphasis on the auxiliary objective More transferability, but worse in-domain performance Q: What if we get a bit of out-of-domain training data? A: Pre-training with our methods still helps Especially with datasets with different biases
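The first question on this slide, whether the kind of bias matters, is answered in the paper by comparing three numbers per target dataset: the majority-class baseline, a hypothesis-only classifier trained on SNLI, and one trained on the target itself. The helper below sketches that comparison; hyp_only_accuracy and the target.train_labels / target.test_labels attributes are placeholders for whatever hypothesis-only model and dataset wrapper one actually uses, not names from the paper's code.

```python
from collections import Counter

def majority_baseline(train_labels, test_labels):
    """Accuracy of always predicting the target training set's most frequent class (Maj)."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return sum(label == majority for label in test_labels) / len(test_labels)

def bias_report(target, hyp_only_accuracy):
    """Compare Maj against hypothesis-only classifiers trained on SNLI and on the target."""
    return {
        "majority": majority_baseline(target.train_labels, target.test_labels),
        # Below the majority baseline suggests the target's biases differ from SNLI's.
        "hyp_only_trained_on_snli": hyp_only_accuracy(train="snli", test=target),
        # Close to the majority baseline suggests the target has few hypothesis-only biases.
        "hyp_only_trained_on_target": hyp_only_accuracy(train=target, test=target),
    }
```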
[]
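Appendix A.1 in the paper content above reduces to a small lookup from 3-way SNLI-style predictions onto each target dataset's label space, so it is easy to make concrete. The functions below are a plausible reading of that description; the scheme names passed as arguments are mine, not identifiers from the paper's code.

```python
def map_prediction(pred, scheme):
    """Map a 3-way prediction onto a target dataset's label space (per Appendix A.1)."""
    if scheme == "binary":            # ADD-ONE-RTE and the recast datasets
        return "ENTAILED" if pred == "ENTAILMENT" else "NOT-ENTAILED"
    if scheme == "scitail":           # SCITAIL has no CONTRADICTION label
        return "NEUTRAL" if pred == "CONTRADICTION" else pred
    return pred                       # MNLI, SICK, and MPE share SNLI's 3-way space

def joci_to_nli(score):
    """Convert JOCI's 5-way ordinal gold score into 3-way NLI tags."""
    return {1: "CONTRADICTION", 5: "ENTAILMENT"}.get(score, "NEUTRAL")
```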
GEM-SciDuet-train-59#paper-1116#slide-14
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases-artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/ azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include human-judged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) [part of the dataset list was lost in extraction; the Method 2 panel of Table 1, synthetic-setting accuracies for varying α and β, spilled in here and has been removed] SPR (Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesis-only biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyper-parameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-14
More Analysis
Q: Are biases really removed from the hidden representations? A: Some, but not all. See our recent work: On Adversarial Removal of Hypothesis-only Bias in NLI, Q: Does this approach work for other tasks? A: Seems to work for VQA (Ramakrishnan+ 18). A: But there are shortcomings. See our recent work: Adversarial Regularization for VQA: Strengths, Shortcomings, and Side Effects, SiVL 2019
Q: Are biases really removed from the hidden representations? A: Some, but not all. See our recent work: On Adversarial Removal of Hypothesis-only Bias in NLI, Q: Does this approach work for other tasks? A: Seems to work for VQA (Ramakrishnan+ 18). A: But there are shortcomings. See our recent work: Adversarial Regularization for VQA: Strengths, Shortcomings, and Side Effects, SiVL 2019
[]
GEM-SciDuet-train-59#paper-1116#slide-15
1116
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases: artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets. 1 * * Equal contribution 1 Our code is available at https://github.com/azpoliak/robust-nli. 2 This hypothesis contradicts the premise and would likely not be inferred.
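The abstract's statement that the methods "predict the probability of a premise given a hypothesis and NLI label" corresponds to the decomposition used in the paper's Section 3; restated here in LaTeX, under the paper's own assumption that p(P | H) is a fixed constant:

```latex
% Generative objective behind both training methods (Section 3).
% p(P | H) is assumed constant, so only p_theta(y | P, H) and p(y | H) are modelled:
% Method 1 estimates p(y | H) with a hypothesis-only classifier,
% Method 2 approximates it by sampling random premises P'.
\log p(P \mid y, H) \;=\; \log \frac{p_\theta(y \mid P, H)\, p(P \mid H)}{p(y \mid H)}
```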
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts (Cooper et al., 1996; Dagan et al., 2006) .", "In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone).", "2 The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.", "However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018) .", "3 For instance, in some datasets, negation words like \"not\" and \"nobody\" are often associated with a relationship of contradiction.", "As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.", "Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets (Sharma et al., 2018) is costly and may still result in other artifacts; filtering \"easy\" examples and defining a harder subset is useful for evaluation purposes (Gururangan et al., 2018) , but difficult to do on a large scale that enables training; and compiling adversarial examples (Glockner et al., 2018) is informative but again limited by scale or diversity.", "Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts.", "Typical NLI models learn to predict an entailment label discriminatively given a premisehypothesis pair (Figure 1a ), enabling them to learn hypothesis-only biases.", "Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts.", "While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases.", "Our first method uses a hypothesis-only classifier (Figure 1b ) and the second uses negative sampling by swapping premises between premisehypothesis pairs (Figure 1c ).", "Figure 1 : 
Illustration of (a) the baseline NLI architecture, and our two proposed methods to remove hypothesis only-biases from an NLI model: (b) uses a hypothesis-only classifier, and (c) samples a random premise.", "Arrows correspond to the direction of propagation.", "Green or red arrows respectively mean that the gradient sign is kept as is or reversed.", "Gray arrow indicates that the gradient is not back-propagated -this only occurs in (c) when we randomly sample a premise, otherwise the gradient is back-propagated.", "f and g represent encoders and classifiers.", "We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings.", "First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts.", "Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases.", "We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts.", "An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases.", "We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset.", "Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data.", "In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases.", "Elsewhere , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased.", "However, we caution that complete removal of biases remains difficult and is dependent on the techniques used.", "The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed.", "In summary, in this paper we make the follow-ing contributions: • Two novel methods to train NLI models that are more robust to dataset-specific artifacts.", "• An empirical evaluation of the methods on a synthetic dataset and 12 naturalistic datasets.", "• An extensive analysis of the effects of our methods on handling bias.", "Motivation A training instance for NLI consists of a hypothesis sentence H, a premise statement P , and an inference label y.", "A probabilistic NLI model aims to learn a parameterized distribution p θ (y | P, H) to compute the probability of the label given the two sentences.", "We consider NLI models with premise and hypothesis encoders, f P,θ and f H,θ , which learn representations of P and H, and a classification layer, g θ , which learns a distribution over y.", "Typically, this is done by maximizing this discriminative likelihood directly, which will act as our baseline ( Figure 1a ).", "However, many NLI datasets contain biases that allow models to perform non-trivially well when accessing just the hypotheses (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "This allows models to leverage hypothesis-only biases that may be present in a dataset.", "A model may perform well on a specific dataset, without 
identifying whether P entails H. Gururangan et al.", "(2018) argue that \"the bulk\" of many models' \"success [is] attribute[d] to the easy examples\".", "Consequently, this may limit how well a model trained on one dataset would perform on other datasets that may have different artifacts.", "Consider an example where P and H are strings from {a, b, c}, and an environment where P en-tails H if and only if the first letters are the same, as in synthetic dataset A.", "In such a setting, a model should be able to learn the correct condition for P to entail H. 4 Synthetic dataset A (a, a) → TRUE (a, b) → FALSE (b, b) → TRUE (b, a) → FALSE Imagine now that an artifact c is appended to every entailed H (synthetic dataset B).", "A model of y with access only to the hypothesis side can fit the data perfectly by detecting the presence or absence of c in H, ignoring the more general pattern.", "Therefore, we hypothesize that a model that learns p θ (y | P, H) by training on such data would be misled by the bias c and would fail to learn the relationship between the premise and the hypothesis.", "Consequently, the model would not perform well on the unbiased synthetic dataset A.", "Synthetic dataset B (with artifact) (a, ac) → TRUE (a, b) → FALSE (b, bc) → TRUE (b, a) → FALSE Instead of maximizing the discriminative likelihood p θ (y | P, H) directly, we consider maximizing the likelihood of generating the premise P conditioned on the hypothesis H and the label y: p(P | H, y).", "This objective cannot be fooled by hypothesis-only features, and it requires taking the premise into account.", "For example, a model that only looks for c in the above example cannot do better than chance on this objective.", "However, as P comes from the space of all sentences, this objective is much more difficult to estimate.", "Training Methods Our goal is to maximize log p(P | H, y) on the training data.", "While we could in theory directly parameterize this distribution, for efficiency and simplicity we instead write it in terms of the standard p θ (y | P, H) and introduce a new term to approximate the normalization: log p(P | y, H) = log p θ (y | P, H)p(P | H) p(y | H) .", "Throughout we will assume p(P | H) is a fixed constant (justified by the dataset assumption that, lacking y, P and H are independent and drawn at random).", "Therefore, to approximately maximize this objective we need to estimate p(y | H).", "We propose two methods for doing so.", "Method 1: Hypothesis-only Classifier Our first approach is to estimate the term p(y | H) directly.", "In theory, if labels in an NLI dataset depend on both premises and hypothesis (which Poliak et al.", "(2018b) call \"interesting NLI\"), this should be a uniform distribution.", "However, as discussed above, it is often possible to correctly predict y based only on the hypothesis.", "Intuitively, this model can be interpreted as training a classifier to identify the (latent) artifacts in the data.", "We define this distribution using a shared representation between our new estimator p φ,θ (y | H) and p θ (y | P, H).", "In particular, the two share an embedding of H from the hypothesis encoder f H,θ .", "The additional parameters φ are in the final layer g φ , which we call the hypothesis-only classifier.", "The parameters of this layer φ are updated to fit p(y | H) whereas the rest of the parameters in θ are updated based on the gradients of log p(P | y, H).", "Training is illustrated in Figure 1b .", "This interplay is controlled by two hyper-parameters.", "First, the 
negative term is scaled by a hyper-parameter α.", "Second, the updates of g φ are weighted by β.", "We therefore minimize the following multitask loss functions (shown for a single example): max θ L 1 (θ) = log p θ (y | P, H) − α log p φ,θ (y | H) max φ L 2 (φ) = β log p φ,θ (y | H) We implement these together with a gradient reversal layer (Ganin & Lempitsky, 2015) .", "As illustrated in Figure 1b , during back-propagation, we first pass gradients through the hypothesis-only classifier g φ and then reverse the gradients going to the hypothesis encoder g H,θ (potentially scaling them by β).", "5 Method 2: Negative Sampling As an alternative to the hypothesis-only classifier, our second method attempts to remove annotation artifacts from the representations by sampling alternative premises.", "Consider instead writing the normalization term above as, − log p(y | H) = − log P p(P | H)p(y | P , H) = − log E P p(y | P , H) ≥ −E P log p(y | P , H), where the expectation is uniform and the last step is from Jensen's inequality.", "6 As in Method 1, we define a separate p φ,θ (y | P , H) which shares the embedding layers from θ, f P,θ and f H,θ .", "However, as we are attempting to unlearn hypothesis bias, we block the gradients and do not let it update the premise encoder f P,θ .", "7 The full setting is shown in Figure 1c .", "To approximate the expectation, we use uniform samples P (from other training examples) to replace the premise in a (P , H)-pair, while keeping the label y.", "We also maximize p θ,φ (y | P , H) to learn the artifacts in the hypotheses.", "We use α ∈ [0, 1] to control the fraction of randomly sampled P 's (so the total number of examples remains the same).", "As before, we implement this using gradient reversal scaled by β. max θ L 1 (θ) = (1 − α) log p θ (y | P, H) − α log p θ,φ (y | P , H) max φ L 2 (φ) = β log p θ,φ (y | P , H) Finally, we share the classifier weights between p θ (y | P, H) and p φ,θ (y | P , H).", "In a sense this is counter-intuitive, since p θ is being trained to unlearn bias, while p φ,θ is being trained to learn it.", "However, if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018) .", "If p φ,θ is not trained well, we might be fooled to think that the representation does not contain any biases, while in fact they are still hidden in the representation.", "For some evidence that this indeed happens when the models are trained separately, see .", "8 6 There are more developed and principled approaches in language modeling for approximating this partition function without having to make this assumption.", "These include importance sampling (Bengio & Senecal, 2003) , noisecontrastive estimation (Gutmann & Hyvärinen, 2010) , and sublinear partition estimation .", "These are more difficult to apply in the setting of sampling full sentences from an unknown set.", "We hope to explore methods for applying them in future work.", "7 A reviewer asked about gradient blocking.", "Our motivation was that, for a random premise, we do not have reliable information to update its encoder.", "However, future work can explore different configurations of gradient blocking.", "8 A similar situation arises in neural cryptography (Abadi , since it is known to contain significant annotation artifacts.", "We evaluate the robustness of our methods on other, target datasets.", "As target datasets, we use the 10 datasets investigated by Poliak et al.", "(2018b) in their hypothesisonly study, plus two test sets: GLUE's 
diagnostic test set, which was carefully constructed to not contain hypothesis-biases (Wang et al., 2018) , and SNLI-hard, a subset of the SNLI test set that is thought to have fewer biases (Gururangan et al., 2018) .", "The target datasets include humanjudged datasets that used automatic methods to pair premises and hypotheses, and then relied on humans to label the pairs: SCITAIL , ADD-ONE-RTE (Pavlick & Callison-Burch, 2016) 50 50 50 50 50 0.5 50 50 50 50 50 1 50 50 50 50 50 1.5 50 50 50 50 50 2 50 50 50 50 50 2.5 50 50 50 50 50 3 50 50 100 50 50 3.5 50 50 100 50 50 4 50 100 100 50 50 5 50 50 100 100 50 * 10 75 100 100 100 50 * 20 100 100 100 50 * 50 * (b) Method 2 Reisinger et al., 2015) .", "9 As many of these datasets have different label spaces than SNLI, we define a mapping (Appendix A.1) from our models' predictions to each target dataset's labels.", "Finally, we also test on the Multi-genre NLI dataset (MNLI; Williams et al., 2018) , a successor to SNLI.", "10 Baseline & Implementation Details We use InferSent (Conneau et al., 2017) as our baseline model because it has been shown to work well on popular NLI datasets and is representative of many NLI models.", "We use separate BiLSTM encoders to learn vector representations of P and H. 11 The vector representations are combined following Mou et al.", "(2016) , 12 and passed to an MLP classifier with one hidden layer.", "Our proposed 9 Detailed descriptions of these datasets can be found in Poliak et al.", "(2018b) .", "10 We leave additional NLI datasets, such as the Diverse NLI Collection (Poliak et al., 2018a) , for future work.", "11 Many NLI models encode P and H separately (Rocktäschel et al., 2016; Mou et al., 2016; Liu et al., 2016; Cheng et al., 2016; Chen et al., 2017) , although some share information between the encoders via attention Duan et al., 2018) .", "12 Specifically, representations are concatenated, subtracted, and multiplied element-wise.", "methods for mitigating biases use the same technique for representing and combining sentences.", "Additional implementation details are provided in Appendix A.2.", "For both methods, we sweep hyper-parameters α, β over {0.05, 0.1, 0.2, 0.4, 0.8, 1.0}.", "For each target dataset, we choose the best-performing model on its development set and report results on the test set.", "13 Results Synthetic Experiments To examine how well our methods work in a controlled setup, we train on the biased dataset (B), but evaluate on the unbiased test set (A).", "As expected, without a method to remove hypothesisonly biases, the baseline fails to generalize to the test set.", "Examining its predictions, we found that the baseline model learned to rely on the presence/absence of the bias term c, always predicting TRUE/FALSE respectively.", "Table 1 shows the results of our two proposed methods.", "As we increase the hyper-parameters α and β, our methods initially behave like the baseline, learning the training set but failing on the test set.", "However, with strong enough hyperparameters (moving towards the bottom in the tables), they perform perfectly on both the biased training set and the unbiased test set.", "For Method 1, stronger hyper-parameters work better.", "Method 2, in particular, breaks down with too many random samples (increasing α), as expected.", "We also found that Method 1 did not require as strong β as Method 2.", "From the synthetic experiments, it seems that Method 1 learns to ignore the bias c and learn the desired relationship between P and H across many 
configurations, while Method 2 requires much stronger β.", "Results on existing NLI datasets Table 2 (left block) reports the results of our proposed methods compared to the baseline in application to the NLI datasets.", "The method using the hypothesis-only classifier to remove hypothesis-only biases from the model (Method 1) outperforms the baseline in 9 out of 12 target datasets (∆ > 0), though most improvements are small.", "The training method using negative sampling (Method 2) only outperforms the baseline in 5 datasets, 4 of which are cases where the other method also outperformed the baseline.", "These gains are much larger than those of Method 1.", "We also report results of the proposed methods on the SNLI test set (right block).", "As our results improve on the target datasets, we note that Method 1's performance on SNLI does not drastically decrease (small ∆), even when the improvement on the target dataset is large (for example, in SPR).", "For this method, the performance on SNLI drops by just an average of 1.11 (0.65 STDV).", "For Method 2, there is a large decrease on SNLI as results drop by an average of 11.19 (12.71 STDV).", "For these models, when we see large improvement on a target dataset, we often see a large drop on SNLI.", "For example, on ADD-ONE-RTE, Method 2 outperforms the baseline by roughly 17% but performs almost 50% lower on SNLI.", "Based on this, as well as the results on the synthetic dataset, Method 2 seems to be much more unstable and highly dependent on the right hyper-parameters.", "Analysis Our results demonstrate that our approaches may be robust to many datasets with different types of bias.", "We next analyze our results and explore modifications to the experimental setup that may improve model transferability across NLI datasets.", "Interplay with known biases A priori, we expect our methods to provide the most benefit when a target dataset has no hypothesis-only biases or such biases that differ from ones in the training data.", "Previous work estimated the amount of bias in NLI datasets by comparing the performance of a hypothesis-only classifier with the majority baseline (Poliak et al., 2018b) .", "If the classifier outperforms the baseline, the dataset is said to have hypothesis-only biases.", "We follow a similar idea for estimating how similar the biases in a target dataset are to those in the source dataset.", "We compare the performance of a hypothesis-only classifier trained on SNLI and evaluated on each target dataset, to a majority baseline of the most frequent class in each target dataset's training set (Maj) .", "We also compare to a hypothesis-only classifier trained and tested on Figure 2 : Accuracies of majority and hypothesis-only baselines on each dataset (x-axis).", "The datasets are generally ordered by increasing difference between a hypothesis-only model trained on the target dataset (green) compared to trained on SNLI (yellow).", "each target dataset.", "14 Figure 2 shows the results.", "When the hypothesis-only model trained on SNLI is tested on the target datasets, the model performs below Maj (except for MNLI), indicating that these target datasets contain different biases than those in SNLI.", "The largest difference is on SPR: a hypothesis-only model trained on SNLI performs over 50% worse than one trained on SPR.", "Indeed, our methods lead to large improvements on SPR ( Table 2) , indicating that they are especially helpful when the target dataset contains different biases.", "On MNLI, this hypothesis-only model 
performs 10% above Maj, and roughly 20% worse compared to when trained on MNLI, suggesting that MNLI and SNLI have similar biases.", "This may explain why our methods only slightly outperform the baseline on MNLI ( Table 2) .", "The hypothesis-only model trained on each target dataset did not outperform Maj on DPR, ADD-ONE-RTE, SICK, and MPE, suggesting that these datasets do not have noticeable hypothesis-only biases.", "Here, as expected, we observe improvements when our methods are tested on these datasets, to varying degrees (from 0.45 on MPE to 31.11 on SICK).", "We also see improvements on datasets with biases (high performance of training on each dataset compared to the corresponding majority baseline), most noticeably SPR.", "The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.", "However, when we strengthen α and β (below), Method 1 outperforms the baseline.", "14 A reviewer noted that this method may miss similar bias \"types\" that are achieved through different lexical items.", "We note that our use of pre-trained word embeddings might mitigate this concern.", "Dataset Base Method Finally, both methods obtain improved results on the GLUE diagnostic set, designed to be biasfree.", "We do not see improvements on SNLI-hard, indicating it may still have biases -a possibility acknowledged by Gururangan et al.", "(2018) .", "Stronger hyper-parameters In the synthetic experiment, we found that increasing α and β improves the models' ability to generalize to the unbiased dataset.", "Does the same apply to natural NLI datasets?", "We expect that strengthening the auxiliary losses (L 2 in our methods) during training will hurt performance on the original data (where biases are useful), but improve on the target data, which may have different or no biases (Figure 2) .", "To test this, we increase the hyperparameter values during training; we consider the range {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "15 While there are other ways to strengthen our methods, e.g., increasing the number or size of hidden layers (Elazar & Goldberg, 2018) , we are interested in the effect of α and β as they control how much bias is subtracted from our baseline model.", "Table 3 shows the results of Method 1 with stronger hyper-parameters on the existing NLI datasets.", "As expected, performance on SNLI test sets (SNLI and SNLI-hard in Table 3 ) decreases more, but many of the other datasets benefit from stronger hyper-parameters (compared with Table 2 ).", "We see the largest improvement on SICK, achieving over 10% increase compared to the 1.8% gain in Table 2 .", "As for Method 2, we found large drops in quality even in our basic configurations (Appendix A.3), so we do not increase the hyper-parameters further.", "This should not be too surprising, adding too many random premises will lead to a model's degradation.", "Fine-tuning on target datasets Our main goal is to determine whether our methods help a model perform well across multiple datasets by ignoring dataset-specific artifacts.", "In turn, we did not update the models' parameters on other datasets.", "But, what if we are given different amounts of training data for a new NLI dataset?", "To determine if our approach is still helpful, we updated four models on increasing sizes of training data from two target datasets (MNLI and SICK).", "All three training approaches-the baseline, Method 1, and Method 2-are used to pretrain a model on SNLI and fine-tune on the target dataset.", "The fourth 
model is the baseline trained only on the target dataset.", "Both MNLI and SICK have the same label spaces as SNLI, allowing us to hold that variable constant.", "We use SICK because our methods resulted in good gains on it (Table 2) .", "MNLI's large training set allows us to consider a wide range of training set sizes.", "16 Figure 3 shows the results on the dev sets.", "In MNLI, pre-training is very helpful when finetuning on a small amount of new training data, although there is little to no gain from pre-training with either of our methods compared to the baseline.", "This is expected, as we saw relatively small gains with the proposed methods on MNLI, and can be explained by SNLI and MNLI having similar biases.", "In SICK, pre-training with either of our 16 We hold out 10K examples from the training set for dev as gold labels for the MNLI test set are not publicly available.", "We evaluate on MNLI's matched dev set to assure consistent domains when fine-tuning.", "methods is better in most data regimes, especially with very small amounts of target training data.", "17 Related Work Biases and artifacts in NLU datasets Many natural language undersrtanding (NLU) datasets contain annotation artifacts.", "Early work on NLI, also known as recognizing textual entailment (RTE), found biases that allowed models to perform relatively well by focusing on syntactic clues alone (Snow et al., 2006; Vanderwende & Dolan, 2006) .", "Recent work also found artifacts in new NLI datasets (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018b) .", "Other NLU datasets also exhibit biases.", "In ROC Stories (Mostafazadeh et al., 2016) , a story cloze dataset, Schwartz et al.", "(2017b) obtained a high performance by only considering the candidate endings, without even looking at the story context.", "In this case, stylistic features of the candidate endings alone, such as the length or certain words, were strong indicators of the correct ending (Schwartz et al., 2017a; Cai et al., 2017) .", "A similar phenomenon was observed in reading comprehension, where systems performed non-trivially well by using only the final sentence in the passage or ignoring the passage altogether (Kaushik & Lipton, 2018) .", "Finally, multiple studies found non-trivial performance in visual question answering (VQA) by using only the question, without access to the image, due to question biases Kafle & Kanan, 2016 , 2017 Goyal et al., 2017; Agrawal et al., 2017) .", "Transferability across NLI datasets It has been known that many NLI models do not transfer across NLI datasets.", "Chen Zhang's thesis (Zhang, 2010) focused on this phenomena as he demonstrated that \"techniques developed for textual entailment\" datasets, e.g., RTE-3, do not transfer well to other domains, specifically conversational entailment (Zhang & Chai, 2009 , 2010 .", "Bowman et al.", "(2015) and Williams et al.", "(2018) demonstrated (specifically in their respective Tables 7 and 4) how models trained on SNLI and MNLI may not transfer well across other NLI datasets like SICK.", "Talman & Chatzikyriakidis (2018) recently reported similar findings using many advanced deep-learning models.", "Improving model robustness Neural networks are sensitive to adversarial examples, primarily in machine vision, but also in NLP (Jia & Liang, 2017; Belinkov & Bisk, 2018; Ebrahimi et al., 2018; Heigold et al., 2018; Mudrakarta et al., 2018; Ribeiro et al., 2018; Belinkov & Glass, 2019) .", "A common approach to improving robustness is to include adversarial examples 
in training (Szegedy et al., 2014; Goodfellow et al., 2015) .", "However, this may not generalize well to new types of examples (Xiaoyong Yuan, 2017; Tramr et al., 2018) .", "Domain-adversarial neural networks aim to increase robustness to domain change, by learning to be oblivious to the domain using gradient reversals (Ganin et al., 2016) .", "Our methods rely similarly on gradient reversals when encouraging models to ignore dataset-specific artifacts.", "One distinction is that domain-adversarial networks require knowledge of the domain at training time, while our methods learn to ignore latent artifacts and do not require direct supervision in the form of a domain label.", "Others have attempted to remove biases from learned representations, e.g., gender biases in word embeddings (Bolukbasi et al., 2016) or sensitive information like sex and age in text representations (Li et al., 2018) .", "However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018) .", "In contrast to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.", "(2018) .", "Recent work has applied adversarial learning to NLI.", "Minervini & Riedel (2018) generate ad-versarial examples that do not conform to logical rules and regularize models based on those examples.", "Similarly, Kang et al.", "(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.", "In contrast, we do not use external resources and we are interested in mitigating hypothesisonly biases.", "Finally, a similar approach has recently been used to mitigate biases in VQA (Ramakrishnan et al., 2018; Grand & Belinkov, 2019) .", "Conclusion Biases in annotations are a major source of concern for the quality of NLI datasets and systems.", "We presented a solution for combating annotation biases by proposing two training methods to predict the probability of a premise given an entailment label and a hypothesis.", "We demonstrated that this discourages the hypothesis encoder from learning the biases to instead obtain a less biased representation.", "When empirically evaluating our approaches, we found that in a synthetic setting, as well as on a wide-range of existing NLI datasets, our methods perform better than the traditional training method to predict a label given a premise-hypothesis pair.", "Furthermore, we performed several analyses into the interplay of our methods with known biases in NLI datasets, the effects of stronger bias removal, and the possibility of fine-tuning on the target datasets.", "Our methodology can be extended to handle biases in other tasks where one is concerned with finding relationships between two objects, such as visual question answering, story cloze completion, and reading comprehension.", "We hope to encourage such investigation in the broader community.", "A Appendix A.1 Mapping labels Each premise-hypothesis pair in SNLI is labeled as ENTAILMENT, NEUTRAL, or CONTRADIC-TION.", "MNLI, SICK, and MPE use the same label space.", "Examples in JOCI are labeled on a 5-way ordinal scale.", "We follow Poliak et al.", "(2018b) by converting it \"into 3-way NLI tags where 1 maps to CONTRADICTION, 2-4 maps to NEUTRAL, and 5 maps to ENTAILMENT.\"", "Since examples in SCI-TAIL are labeled as ENTAILMENT or NEUTRAL, when evaluating on SCITAIL, we convert the model's CONTRADICTION to NEUTRAL.", 
"ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.", "However, their label sets are ENTAILED and NOT-ENTAILED.", "In these cases, when the models predict ENTAILMENT, we map the label to ENTAILED, and when the models predict NEUTRAL or CONTRADICTION, we map the label to NOT-ENTAILED.", "A.2 Implementation details For our experiments on the synthetic dataset, the characters are embedded with 10-dimensional vectors.", "Input strings are represented as a sum of character embeddings, and the premise-hypothesis pair is represented by a concatenation of these embeddings.", "The classifiers are single-layer MLPs of size 20 dimensions.", "We train these models with SGD until convergence.", "For the traditional NLI datasets, we use pre-computed 300-dimensional GloVe embeddings (Pennington et al., 2014) .", "18 The sentence representations learned by the BiLSTM encoders and the MLP classifier's hidden layer have a dimensionality of 2048 and 512 respectively.", "We follow InferSent's training regime, using SGD with an initial learning rate of 0.1 and optional early stopping.", "See Conneau et al.", "(2017) for details.", "A.3 Hyper-parameter sweeps Here we provide 10-fold cross-validation results on a subset of the SNLI training data (50K sentences) with different settings of our hyperparameters.", "Figure 4b shows the dev set results with different configurations of Method 2.", "Notice that performance degrades quickly when we increase the fraction of random premises (large α).", "In contrast, the results with Method 1 (Figure 4a ) are more stable.", "18 Specifically, glove.840B.300d.zip." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "5.1", "5.2", "6", "6.1", "6.2", "6.3", "7", "8" ], "paper_header_content": [ "Introduction", "Motivation", "Training Methods", "Method 1: Hypothesis-only Classifier", "Method 2: Negative Sampling", "Synthetic Experiments", "Results on existing NLI datasets", "Analysis", "Interplay with known biases", "Stronger hyper-parameters", "Fine-tuning on target datasets", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-59#paper-1116#slide-15
Contributions
Our approach may aid with one-sided biases in NLI and other tasks. Reduces the amount of bias. Our analysis shows that the methods should be handled with care. Not all bias may be removed. Some other information may also be removed. The goal matters: bias may sometimes be helpful.
Our approach may aid with one-sided biases in NLI and other tasks. Reduces the amount of bias. Our analysis shows that the methods should be handled with care. Not all bias may be removed. Some other information may also be removed. The goal matters: bias may sometimes be helpful.
[]
GEM-SciDuet-train-60#paper-1117#slide-0
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-0
Task Description
Given a sentence, predict its sentiment polarity (or a more fine-grained class). Examples: "The food is very delicious." -> Positive; "The movie is so boring." -> Negative
Given a sentence, predict its sentiment polarity (or a more fine-grained class). Examples: "The food is very delicious." -> Positive; "The movie is so boring." -> Negative
[]
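The training objective quoted in the paper_content above (Eq. 17-18) adds an L2 term and a Frobenius-norm penalty to the cross entropy, and the implementation details fix lambda = 1e-5, mu = 1e-4, psi = 0.9. Below is a hedged single-sentence reading of that objective; stacking O row-wise, the epsilon inside the log, and the toy usage values are my assumptions (any normalization behind the tilde on O is not reproduced).

```python
import numpy as np

def mean_loss(y_true, y_pred, o1, o2, o3, params, lam=1e-5, mu=1e-4, psi=0.9):
    """Single-sentence version of Eq. (17)-(18): cross entropy + L2 regularization
    + mu * ||O O^T - psi I||_F^2, where O stacks the three resource-specific
    sentence representations row-wise to encourage their diversity."""
    ce = -np.sum(y_true * np.log(y_pred + 1e-12))              # cross entropy (epsilon for stability)
    l2 = lam * sum(np.sum(p ** 2) for p in params)             # L2 over regularized parameters
    O = np.stack([o1, o2, o3])                                 # (3, d), as in Eq. (18)
    diversity = mu * np.sum((O @ O.T - psi * np.eye(3)) ** 2)  # squared Frobenius norm
    return ce + l2 + diversity

# Toy usage with a 5-class (SST-style) distribution; all values are illustrative.
y_true = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.1, 0.1, 0.6, 0.1, 0.1])
o1, o2, o3 = np.ones(4), np.zeros(4), np.full(4, 0.5)
loss = mean_loss(y_true, y_pred, o1, o2, o3, params=[np.ones((4, 4))])
```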
GEM-SciDuet-train-60#paper-1117#slide-1
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-1
Early Methods
Linguistic knowledge based: sentiment lexicon [Turney, 2002; Taboada et al.]. Neural networks: Recursive Neural Network [Socher et al., 2011], Convolutional Neural Network [Kim, 2014], Recurrent Neural Network/LSTM [Hochreiter and Schmidhuber, 1997]. Incorporating linguistic knowledge with neural networks: linguistically regularized LSTM [Qian et al., 2017], lexicon-integrated CNN models with attention [Bonggun et al., 2017]
Linguistic knowledge based: sentiment lexicon [Turney, 2002; Taboada et al.]. Neural networks: Recursive Neural Network [Socher et al., 2011], Convolutional Neural Network [Kim, 2014], Recurrent Neural Network/LSTM [Hochreiter and Schmidhuber, 1997]. Incorporating linguistic knowledge with neural networks: linguistically regularized LSTM [Qian et al., 2017], lexicon-integrated CNN models with attention [Bonggun et al., 2017]
[]
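The sentence classifier described in the paper_content above (Eq. 16) is a plain softmax over an affine map of the concatenated representation [o1; o2; o3]. A minimal sketch follows; the label count, dimensions, and random weights are illustrative assumptions rather than the trained model.

```python
import numpy as np

def classify(o_tilde, W_o, b_o):
    """Softmax classifier of Eq. (16): map the concatenated sentiment-specific
    representation to a distribution over sentiment labels."""
    logits = W_o.T @ o_tilde + b_o              # one unnormalized score per label
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()

# Toy usage: three concatenated 4-dim representations -> 5 SST-style labels.
rng = np.random.default_rng(1)
o_tilde = rng.normal(size=12)
W_o, b_o = rng.normal(size=(12, 5)), np.zeros(5)
y_hat = classify(o_tilde, W_o, b_o)             # sums to 1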
GEM-SciDuet-train-60#paper-1117#slide-2
1117
GEM-SciDuet-train-60#paper-1117#slide-2
Motivation
Sentiment linguistic knowledge (e.g., sentiment words, intensity words, negation words) plays an important role in sentiment detection. Via attention mechanisms, we can integrate various kinds of sentiment resource information into neural networks to boost performance.
Sentiment linguistic knowledge (e.g., sentiment words, intensity words, negation words) plays an important role in sentiment detection. Via attention mechanisms, we can integrate various kinds of sentiment resource information into neural networks to boost performance.
[]
GEM-SciDuet-train-60#paper-1117#slide-3
1117
GEM-SciDuet-train-60#paper-1117#slide-3
Our Model
The overall framework of our model
The overall framework of our model
[]
GEM-SciDuet-train-60#paper-1117#slide-6
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-6
Context sentiment correlation modeling
Note that in the proceedings version, there are some typos in this part. The updated version can be obtained via arXiv: https://arxiv.org/abs/1807.04990
Note that in the proceedings version, there are some typos in this part. The updated version can be obtained via arXiv: https://arxiv.org/abs/1807.04990
[]
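A small sketch of the correlation-matrix step highlighted in the record above (Eqs. 1–3) and of the enhanced context representation (Eq. 7). It is illustrative only: the per-word unit-norm scaling is one reading of "Each W is normalized", and because the printed Eqs. 4–6 do not type-check (the slide itself notes the proceedings formulas contain typos and defers to the arXiv copy), the projection used for the enhanced context below is a dimensionally consistent reading rather than a verbatim transcription.

```python
import torch

d, t, m, k, p = 300, 20, 4, 2, 1                 # embedding dim and toy sequence lengths

def unit_cols(x):                                # one reading of "Each W is normalized"
    return x / x.norm(dim=0, keepdim=True).clamp_min(1e-8)

Wc = unit_cols(torch.randn(d, t))                # context words,   d x t
Ws = unit_cols(torch.randn(d, m))                # sentiment words, d x m
Wi = unit_cols(torch.randn(d, k))                # intensity words, d x k
Wn = unit_cols(torch.randn(d, p))                # negation words,  d x p

# Eqs. (1)-(3): correlation matrices between context and resource words.
Ms, Mi, Mn = Wc.T @ Ws, Wc.T @ Wi, Wc.T @ Wn     # t x m, t x k, t x p

# Eqs. (4)-(7): the printed formulas would give the three X^c terms different widths,
# so this sketch uses a dimensionally consistent reading (the slide above notes the
# proceedings formulas have typos): each resource is projected back onto the t
# context positions before summing.
Xc_s = Ws @ Ms.T                                 # d x t, sentiment-guided view of the context
Xc_i = Wi @ Mi.T                                 # d x t
Xc_n = Wn @ Mn.T                                 # d x t
Xc = Xc_s + Xc_i + Xc_n                          # Eq. (7): enhanced context representation
```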
GEM-SciDuet-train-60#paper-1117#slide-7
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-7
Multi-sentiment-resource attention
Intensity attention and negation attention are computed via methods similar to the sentiment-word attention. Finally, the multi-sentiment-resource enhanced sentence representation:
Intensity attention and negation attention are computed via methods similar to the sentiment-word attention. Finally, the multi-sentiment-resource enhanced sentence representation:
[]
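The attention path summarized in the record above (Eqs. 12–14) pools the context hidden states with weights driven by the mean-pooled resource hidden states. Below is a minimal sketch of one such path, the sentiment-word one; as the slide notes, the intensity and negation paths are computed the same way. The GRU hidden size (128), the 600-dimensional coupled-embedding input (300 char + 300 GloVe concatenated), and the random inputs are placeholders, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ResourceAttention(nn.Module):
    """One attention path of MEAN (Eqs. 12-14): resource-guided pooling of context states."""
    def __init__(self, hidden):
        super().__init__()
        self.W = nn.Linear(2 * hidden, hidden, bias=False)   # W_s in Eq. (13)
        self.u = nn.Linear(hidden, 1, bias=False)            # u_s in Eq. (13)

    def forward(self, Hc, Hr):
        # Hc: (t, hidden) context hidden states; Hr: (m, hidden) resource hidden states
        q = Hr.mean(dim=0, keepdim=True)                                    # Eq. (12): query
        feats = torch.cat([Hc, q.expand(Hc.size(0), -1)], dim=1)            # [h_i; q]
        alpha = torch.softmax(self.u(torch.tanh(self.W(feats))).squeeze(1), dim=0)  # Eqs. (13)-(14)
        return (alpha.unsqueeze(1) * Hc).sum(dim=0)                         # o = sum_i alpha_i h_i

# Toy usage for the sentiment-word path; intensity/negation paths reuse the same module.
emb_dim, hidden = 600, 128                        # 300 char + 300 GloVe; hidden size is assumed
gru_c, gru_s = nn.GRU(emb_dim, hidden), nn.GRU(emb_dim, hidden)   # Eqs. (8)-(9)
Xc, Xs = torch.randn(20, 1, emb_dim), torch.randn(4, 1, emb_dim)  # (seq_len, batch=1, emb_dim)
Hc, _ = gru_c(Xc)
Hs, _ = gru_s(Xs)
o1 = ResourceAttention(hidden)(Hc.squeeze(1), Hs.squeeze(1))      # sentiment-word-enhanced vector
```

Running three such paths over the sentiment, intensity, and negation streams and concatenating their outputs gives the [o1; o2; o3] representation that the classifier consumes; this per-resource weighting is what the paper refers to as the multi-path attention strategy.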
GEM-SciDuet-train-60#paper-1117#slide-8
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-8
Training
The predicted sentiment polarity distribution can be obtained via a fully connected layer with softmax.
The predicted sentiment polarity distribution can be obtained via a fully connected layer with softmax.
[]
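For the "Training" record above, here is a compact sketch of the classifier head (Eq. 16) and the loss of Eq. (17): cross-entropy plus an L2 term plus the diversity penalization µ‖ÕÕᵀ − ψI‖²_F over Õ = [o1; o2; o3]. The λ, µ, ψ values come from the Implementation Details; the row-wise stacking of Õ, the hidden size, and restricting the L2 sum to the classifier parameters (the paper regularizes all parameters Θ) are simplifications for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

lam, mu, psi = 1e-5, 1e-4, 0.9        # lambda, mu, psi from the Implementation Details
C, hidden = 5, 128                    # SST has five classes; the hidden size is assumed

classifier = nn.Linear(3 * hidden, C)                 # Eq. (16): softmax over [o1; o2; o3]
o1, o2, o3 = (torch.randn(hidden, requires_grad=True) for _ in range(3))
y = torch.tensor([3])                                 # gold label of one sentence

logits = classifier(torch.cat([o1, o2, o3]).unsqueeze(0))
ce = F.cross_entropy(logits, y)                       # cross-entropy term of Eq. (17)

# L2 term; the paper sums over all parameters Theta, the classifier alone is used here.
l2 = sum((w ** 2).sum() for w in classifier.parameters())

# Diversity penalization of Eqs. (17)-(18): mu * || O O^T - psi * I ||_F^2
O = torch.stack([o1, o2, o3])                         # assumed layout: one row per resource view
penal = ((O @ O.T) - psi * torch.eye(3)).pow(2).sum()

loss = ce + lam * l2 + mu * penal
loss.backward()                                       # gradients flow to o1-o3 and the classifier
```

The penalization pushes the Gram matrix of the three views toward a scaled identity, which is the sense in which it "encourages the diversity of sentiment sources": the sentiment-, intensity-, and negation-enhanced representations are discouraged from collapsing onto one another.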
GEM-SciDuet-train-60#paper-1117#slide-9
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in detail.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolutional neural network (Char-CNN) to obtain character-level embeddings (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without a max-pooling layer, which better captures the semantic information in character chunks.", "Specifically, we first input one-hot-encoded character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a low-dimensional vector space.", "Finally, each word is represented as a concatenation of the character-level embedding and the word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: W^c = [w^c_1, ..., w^c_t] ∈ R^{d×t} for the context words, W^s = [w^s_1, ..., w^s_m] ∈ R^{d×m} for the sentiment words, W^i = [w^i_1, ..., w^i_k] ∈ R^{d×k} for the intensity words, and W^n = [w^n_1, ..., w^n_p] ∈ R^{d×p} for the negation words.", "Here, t, m, k, p are the lengths of the corresponding word sequences, respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlations.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embeddings, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentiment-resource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as the attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention and negation attention, to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in detail.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and the different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M^s = (W^c)^T · W^s ∈ R^{t×m} (1), M^i = (W^c)^T · W^i ∈ R^{t×k} (2), M^n = (W^c)^T · W^n ∈ R^{t×p} (3), where M^s, M^i, M^n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource words.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X^c_s, X^c_i, X^c_n by the dot products between the context words and the corresponding correlation matrices.", "Meanwhile, we can also obtain the context-word-relevant sentiment word representation matrix X^s by the dot product between the correlation matrix M^s and the sentiment words W^s, the context-word-relevant intensity word representation matrix X^i by the dot product between the intensity words W^i and the correlation matrix M^i, and the context-word-relevant negation word representation matrix X^n by the dot product between the negation words W^n and the correlation matrix M^n .", "The detailed formulas are presented as follows: X^c_s = W^c M^s, X^s = W^s (M^s)^T (4); X^c_i = W^c M^i, X^i = W^i (M^i)^T (5); X^c_n = W^c M^n, X^n = W^n (M^n)^T (6). The final enhanced context word representation matrix is computed as X^c = X^c_s + X^c_i + X^c_n (7).", "Next, we employ four independent GRU networks (Chung et al., 2015) to encode the hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embeddings X^c, X^s, X^i, X^n, the hidden state matrices H^c, H^s, H^i, H^n can be obtained as follows: H^c = GRU(X^c) (8), H^s = GRU(X^s) (9), H^i = GRU(X^i) (10), H^n = GRU(X^n) (11). After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o_1 can be computed as: o_1 = Σ_{i=1}^{t} α_i h^c_i , q^s = (Σ_{i=1}^{m} h^s_i) / m (12), β([h^c_i ; q^s]) = u_s^T tanh(W_s [h^c_i ; q^s]) (13), α_i = exp(β([h^c_i ; q^s])) / Σ_{i=1}^{t} exp(β([h^c_i ; q^s])) (14), where q^s denotes the mean-pooling operation over H^s , β is the attention function that calculates the importance of the i-th context word h^c_i , α_i indicates the importance of the i-th word in the context, and u_s and W_s are learnable parameters.", "Similarly, with the hidden states H^i and H^n for the intensity words and the negation words as attention sources, we can obtain the intensity-word-enhanced sentence representation o_2 and the negation-word-enhanced sentence representation o_3 .", "The final comprehensive sentiment-specific sentence representation õ is the composition of the above three sentiment-resource-specific sentence representations o_1 , o_2 , o_3 : õ = [o_1 , o_2 , o_3 ] (15). Sentence Classifier After obtaining the final sentence representation õ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: ŷ = exp(W_o^T õ + b_o) / Σ_{i=1}^{C} exp(W_o^T õ + b_o) (16), where ŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, and W_o and b_o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and the predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use a dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from the different sentiment-resource-specific sentence representations, which reduces the information redundancy among the different sentiment resource attentions.", "Specifically, the final loss function is presented as follows: 
L(ŷ, y) = − Σ_{i=1}^{N} Σ_{j=1}^{C} y_i^j log(ŷ_i^j) + λ Σ_{θ∈Θ} θ^2 + µ ||Õ Õ^T − ψ I||_F^2 (17), with Õ = [o_1 ; o_2 ; o_3 ] (18), where y_i^j is the target sentiment distribution of the sentence, ŷ_i^j is the predicted probability, θ denotes each parameter to be regularized, Θ is the parameter set, λ is the coefficient for the L_2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the identity matrix, and ||·||_F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are the cross-entropy between the predicted and true distributions and the L_2 regularization, respectively, and the final term is a penalization term that encourages the diversity of the sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "The MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, and 2,210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "The sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually, as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: the Recursive Neural Tensor Network (Socher et al., 2013) is used to model correlations between different dimensions of child node vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and its bidirectional variant to capture sequential information.", "Tree-LSTM: Tree-Structured Long Short-Term Memory (Tai et al., 2015) introduces memory cells and gates into tree-structured neural networks, which is beneficial for capturing semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014) are applied to generate task-specific sentence representations.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NCSL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a self-attention mechanism to learn structured sentence embeddings.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representations for sentiment classification.", "Implementation Details In our experiments, the dimensions of the character-level embedding and the word embedding (GloVe) are both set to 300.", "The kernel sizes of the multi-gram convolution for Char-CNN are set to 2 and 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors to zero vectors.", "We optimize the proposed model with the RMSprop algorithm, using mini-batch training.", "The size of each mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of the L_2 regularization is set to 10^{-5}.", "µ is set to 10^{-4}.", "ψ is set to 0.9.", "When there are no sentiment resource words in a sentence, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as the evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets the state of the art on the MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, CNN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resources with deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM, which integrates the linguistic roles of sentiment, negation and intensity words into neural networks via linguistic regularization.", "For example, our model achieves a 2.4% improvement on the MR dataset and a 0.8% improvement on the SST dataset compared to LR-Bi-LSTM.", "This is because MEAN designs attention mechanisms to leverage sentiment resources efficiently, utilizing the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report an ablation test in terms of discarding the character-level embedding (denoted as MEAN w/o CharCNN) and the sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors contribute greatly to the improvement of MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation, since sentiment words are vital when classifying the polarity of sentences.", "(Qian et al., 2017) , and the results marked with * are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentiment-resource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates sentiment linguistic knowledge into the deep neural network." ] }
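To make the attention mechanism described in the paper content above easier to follow, here is a minimal PyTorch sketch of the multi-sentiment-resource attention module (Eqs. 1-15). It is a hedged reconstruction rather than the authors' released code: the batch-first tensor layout, the hidden size of 150, the separate attention parameters per resource path, and the way the correlation products are arranged so that the enhanced context representation keeps the context length on its sequence axis are all assumptions made for illustration.

```python
# Hedged sketch of the multi-sentiment-resource attention module (Eqs. 1-15).
# Shapes, hidden sizes, and the per-path attention parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSentimentResourceAttention(nn.Module):
    def __init__(self, emb_dim=300, hidden_dim=150):
        super().__init__()
        # Four independent GRUs for context / sentiment / intensity / negation words (Eqs. 8-11).
        self.gru_c = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.gru_s = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.gru_i = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.gru_n = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # One (W, u) pair per resource path for the additive attention score of Eq. 13.
        self.att_proj = nn.ModuleList([nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(3)])
        self.att_vec = nn.ParameterList([nn.Parameter(torch.randn(hidden_dim)) for _ in range(3)])

    def attend(self, h_c, h_r, path):
        # h_c: (batch, t, hidden) context states; h_r: (batch, len_r, hidden) resource states.
        q = h_r.mean(dim=1)                                   # mean pooling, q^s in Eq. 12
        q = q.unsqueeze(1).expand(-1, h_c.size(1), -1)        # broadcast over context positions
        score = torch.tanh(self.att_proj[path](torch.cat([h_c, q], dim=-1)))
        score = torch.matmul(score, self.att_vec[path])       # beta([h^c_i; q]), Eq. 13
        alpha = F.softmax(score, dim=1).unsqueeze(-1)         # Eq. 14
        return (alpha * h_c).sum(dim=1)                       # o_1 / o_2 / o_3, Eq. 12

    def forward(self, w_c, w_s, w_i, w_n):
        # w_*: coupled word embeddings, shape (batch, length, emb_dim).
        m_s = torch.bmm(w_c, w_s.transpose(1, 2))             # M^s, Eq. 1
        m_i = torch.bmm(w_c, w_i.transpose(1, 2))             # M^i, Eq. 2
        m_n = torch.bmm(w_c, w_n.transpose(1, 2))             # M^n, Eq. 3
        # Resource-relevant context representation X^c (Eqs. 4-7), arranged so that the
        # context length t stays on the sequence axis (an implementation assumption).
        x_c = torch.bmm(m_s, w_s) + torch.bmm(m_i, w_i) + torch.bmm(m_n, w_n)
        x_s = torch.bmm(m_s.transpose(1, 2), w_c)             # context-relevant sentiment words
        x_i = torch.bmm(m_i.transpose(1, 2), w_c)
        x_n = torch.bmm(m_n.transpose(1, 2), w_c)
        h_c, _ = self.gru_c(x_c)
        h_s, _ = self.gru_s(x_s)
        h_i, _ = self.gru_i(x_i)
        h_n, _ = self.gru_n(x_n)
        o1 = self.attend(h_c, h_s, 0)
        o2 = self.attend(h_c, h_i, 1)
        o3 = self.attend(h_c, h_n, 2)
        return torch.cat([o1, o2, o3], dim=-1)                # õ = [o_1, o_2, o_3], Eq. 15


if __name__ == "__main__":
    att = MultiSentimentResourceAttention()
    fake = lambda n: torch.randn(2, n, 300)                   # batch of 2, random fake embeddings
    print(att(fake(20), fake(4), fake(2), fake(1)).shape)     # torch.Size([2, 450])
```

The stub at the bottom only checks tensor shapes; in the full model the inputs would be the coupled character/word embeddings and the output õ would feed the softmax classifier of Eq. 16.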
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-9
Experiments
training/validation/test split is the same as (Qian et al., 2017) ; Sentiment words-----combined from (Hu and Liu, 2004) and Intensity words and Negation words manually collected due to the limited number.
training/validation/test split is the same as (Qian et al., 2017) ; Sentiment words-----combined from (Hu and Liu, 2004) and Intensity words and Negation words manually collected due to the limited number.
[]
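Finally, since the record above also describes the coupled word embedding (a fully convolutional character encoder concatenated with GloVe vectors), here is a small, hedged sketch of that module. The character vocabulary size, the 50-dimensional character channels, the mean aggregation over character positions (the text only says that max pooling is not used), and the embedding-lookup stand-in for one-hot inputs are all assumptions; the paper itself fixes only the 300-dimensional word vectors and the kernel sizes 2 and 3.

```python
# Hedged sketch of the coupled word embedding: 1x1 convolution -> multi-gram convolutions
# (kernel sizes 2 and 3, no max pooling) over characters, concatenated with a word vector.
import torch
import torch.nn as nn


class CoupledWordEmbedding(nn.Module):
    def __init__(self, n_chars=70, n_words=20000, char_dim=50, word_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)     # stands in for one-hot character input
        self.conv1x1 = nn.Conv1d(char_dim, char_dim, kernel_size=1)
        self.conv2 = nn.Conv1d(char_dim, char_dim, kernel_size=2)
        self.conv3 = nn.Conv1d(char_dim, char_dim, kernel_size=3)
        self.word_emb = nn.Embedding(n_words, word_dim)     # would be initialized from GloVe

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq_len, max_chars) with max_chars >= 3; word_ids: (batch, seq_len)
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * t, c)).transpose(1, 2)   # (b*t, char_dim, c)
        x = torch.relu(self.conv1x1(x))
        f2 = torch.relu(self.conv2(x)).mean(dim=2)          # 2-gram character features
        f3 = torch.relu(self.conv3(x)).mean(dim=2)          # 3-gram character features
        char_feat = torch.cat([f2, f3], dim=1).view(b, t, -1)
        return torch.cat([char_feat, self.word_emb(word_ids)], dim=-1)  # coupled embedding


if __name__ == "__main__":
    emb = CoupledWordEmbedding()
    chars = torch.randint(0, 70, (2, 20, 12))               # batch of 2, 20 words, 12 chars each
    words = torch.randint(0, 20000, (2, 20))
    print(emb(chars, words).shape)                          # torch.Size([2, 20, 400])
```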