{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:16.851435Z" }, "title": "Genres, Parsers, and BERT: The Interaction Between Parsers and BERT Models in Cross-Genre Constituency Parsing in English and Swedish", "authors": [ { "first": "Daniel", "middle": [], "last": "Dakota", "suffix": "", "affiliation": { "laboratory": "", "institution": "Uppsala University", "location": {} }, "email": "ddakota@lingfil.uu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Genre and domain are often used interchangeably, but are two different properties of a text. Successful parser adaptation requires both cross-domain and cross-genre sensitivity (Rehbein and Bildhauer, 2017). While the impact domain differences have on parser performance degradation is more easily measurable in respect to lexical differences, impact of genre differences can be more nuanced. With the predominance of pre-trained language models (LMs; e.g. BERT (Devlin et al., 2019)), there are now additional complexities in developing cross-genre sensitive models due to the infusion of linguistic characteristics derived from, usually, a third genre. We perform a systematic set of experiments using two neural constituency parsers to examine how different parsers behave in combination with different BERT models with varying source and target genres in English and Swedish. We find that there is extensive difficulty in predicting the best source due to the complex interactions between genres, parsers, and LMs. Additionally, the influence of the data used to derive the underlying BERT model heavily influences how best to create more robust and effective cross-genre parsing models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Genre and domain are often used interchangeably, but are two different properties of a text. Successful parser adaptation requires both cross-domain and cross-genre sensitivity (Rehbein and Bildhauer, 2017). While the impact domain differences have on parser performance degradation is more easily measurable in respect to lexical differences, impact of genre differences can be more nuanced. With the predominance of pre-trained language models (LMs; e.g. BERT (Devlin et al., 2019)), there are now additional complexities in developing cross-genre sensitive models due to the infusion of linguistic characteristics derived from, usually, a third genre. We perform a systematic set of experiments using two neural constituency parsers to examine how different parsers behave in combination with different BERT models with varying source and target genres in English and Swedish. We find that there is extensive difficulty in predicting the best source due to the complex interactions between genres, parsers, and LMs. Additionally, the influence of the data used to derive the underlying BERT model heavily influences how best to create more robust and effective cross-genre parsing models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The performance degradation of models trained on one data set when used on another has been well established (Gildea, 2001; Petrov and Klein, 2007) . However, how we define the source of the problem (e.g. out-of-domain differences) is problematic. Within domain adaption, even the term domain is incredibly loosely defined (Ramponi and Plank, 2020) . 
This has allowed the conflation of several different properties of texts, as such properties can be difficult to distinguish given some of their inherent overlap. Relevant for this work is how we define the distinction between genre and domain.", "cite_spans": [ { "start": 109, "end": 123, "text": "(Gildea, 2001;", "ref_id": "BIBREF14" }, { "start": 124, "end": 147, "text": "Petrov and Klein, 2007)", "ref_id": "BIBREF32" }, { "start": 323, "end": 348, "text": "(Ramponi and Plank, 2020)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use Falkenjack et al. (2016) as a template and define genre as dealing with the more abstract linguistic characteristics used within a text (Biber and Conrad, 2009). Domain, however, is more about the topics and content words used. Much parsing work has actively focused on handling domain differences, such as reducing lexical gap issues between target and source domains (e.g. Candito et al. (2011)), while explicit handling of genre differences is not as heavily researched, nor as well understood.", "cite_spans": [ { "start": 7, "end": 31, "text": "Falkenjack et al. (2016)", "ref_id": "BIBREF12" }, { "start": 140, "end": 164, "text": "(Biber and Conrad, 2009)", "ref_id": "BIBREF1" }, { "start": 380, "end": 401, "text": "Candito et al. (2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While much parsing literature uses the terms interchangeably (Rehbein and Bildhauer, 2017), they are not, however, identical concepts. By using them interchangeably, we are not effectively identifying whether out-of-domain differences should be attributed more to genre or to domain differences. For example, Wikipedia articles are written in a style following that of an encyclopedia. Pages on medicine and languages may contain very different vocabularies, but their linguistic characteristics are most likely similar given the encyclopedic style of writing. However, responses in a forum on medical advice may share a large amount of vocabulary overlap with Wikipedia medical pages, but would share considerably less linguistic structure given the dialogue nature of a forum (e.g. more interrogative sentences).", "cite_spans": [ { "start": 61, "end": 90, "text": "(Rehbein and Bildhauer, 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A treebank (in our case constituency treebanks) often contains many noticeable domains, but it is harder to gauge how many distinct genres are present. Sometimes they are explicitly marked in the annotation (Candito and Seddah, 2012; Telljohann et al., 2015), while others explicitly separate out the different genres (McDonald et al., 2011; Adesam et al., 2015). Yet a treebank is oftentimes a concatenation of various texts that may ultimately represent slightly different linguistic characteristics (even if annotated strictly on newspaper text).", "cite_spans": [ { "start": 207, "end": 233, "text": "(Candito and Seddah, 2012;", "ref_id": "BIBREF5" }, { "start": 234, "end": 258, "text": "Telljohann et al., 2015)", "ref_id": "BIBREF40" }, { "start": 323, "end": 346, "text": "(McDonald et al., 2011;", "ref_id": "BIBREF29" }, { "start": 347, "end": 366, "text": "Adesam et al., 2015", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Domain differences often result in a high lexical divergence.
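To make the genre/domain dissociation above concrete, the following toy sketch (our illustration, not part of the original paper; the texts, the naive tokenization, and the question-mark cue are all hypothetical) contrasts a domain signal, vocabulary overlap, with one crude genre signal, the share of interrogative sentences:

```python
# Illustrative only: a domain signal (shared vocabulary) and a crude
# genre signal (ratio of interrogative sentences) computed over two
# tiny hypothetical texts. Real genre cues would be far richer.
import re

def vocab_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lower-cased word types: a rough topic/domain cue."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    return len(ta & tb) / len(ta | tb)

def interrogative_ratio(text: str) -> float:
    """Share of sentences ending in '?': one coarse structural/genre cue."""
    sents = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    return sum(s.endswith("?") for s in sents) / len(sents)

wiki = "Aspirin is a medication used to reduce pain. It is an anti-inflammatory drug."
forum = "Can I take aspirin for pain? My doctor said it is an anti-inflammatory drug."

print(round(vocab_overlap(wiki, forum), 2))                   # nontrivial: same medical domain
print(interrogative_ratio(wiki), interrogative_ratio(forum))  # 0.0 vs 0.5: different genres
```

The two toy texts share the medical domain (substantial lexical overlap) while diverging in genre (declarative encyclopedic prose vs. dialogue-style questions), which is exactly the dissociation the forum/Wikipedia example describes.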
The best performing grammar-based chart constituency parsers were predominantly unlexicalized (Petrov and Klein, 2007), which helped reduce issues with lexical differences. However, current state-of-the-art neural span-based chart parsers have substantially changed this paradigm. With the use of lexical embeddings, there is now a great deal of lexicalization within the modeling architecture to an extent that was not seen before, with some state-of-the-art parsers not utilizing POS tags. The use of character and subtoken information has been shown to be beneficial in reducing lexical sparsity issues (Vania et al., 2018), which ultimately reduces domain difference disparities. However, what impact this degree of lexical contextualization has on cross-genre parsing remains unclear.", "cite_spans": [ { "start": 164, "end": 188, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF32" }, { "start": 685, "end": 705, "text": "(Vania et al., 2018)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Additionally, the use of language models (LMs), such as BERT (Devlin et al., 2019), to derive contextualized embeddings presents yet another variable in selecting the best source. Given that LMs are derived from large, unannotated texts, they implicitly capture various linguistic properties of these texts (Tenney et al., 2019a), which they infuse into the parser via contextualized embeddings. How embeddings derived from (often) a single genre behave as a bridge between two (often different) genres presents an interesting issue.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 310, "end": 332, "text": "(Tenney et al., 2019a)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are interested in examining the following in cross-genre experiments: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most parsing work 1 in parser adaptation has been more explicitly focused on issues of domain differences. Early techniques focused on selecting optimal source data to boost a target set (Plank and van Noord, 2011; McDonald et al., 2011) or on parameter and model optimization to handle both general and domain specific features (Daum\u00e9 III, 2007; Kim et al., 2016). Both delexicalized (Rosa and \u017dabokrtsk\u00fd, 2015) and lexicalized (Falenska and \u00c7etinoglu, 2017) similarity metrics have shown the ability to select optimal source data.", "cite_spans": [ { "start": 187, "end": 214, "text": "(Plank and van Noord, 2011;", "ref_id": "BIBREF34" }, { "start": 215, "end": 237, "text": "McDonald et al., 2011)", "ref_id": "BIBREF29" }, { "start": 326, "end": 343, "text": "(Daum\u00e9 III, 2007;", "ref_id": "BIBREF8" }, { "start": 344, "end": 361, "text": "Kim et al., 2016)", "ref_id": "BIBREF19" }, { "start": 383, "end": 409, "text": "(Rosa and \u017dabokrtsk\u00fd, 2015)", "ref_id": null }, { "start": 426, "end": 456, "text": "(Falenska and \u00c7etinoglu, 2017", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recent work has been focused on creating domain specific embeddings. The use of domain embeddings in Chinese dependency parsing by Li et al. (2019) built on the previous research by Stymne et al. (2018). Both showed that domain-specific and treebank-specific embeddings, respectively, yielded better performance over direct treebank concatenation, as this allows for capturing both domain-specific and general features.
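The general mechanism behind such treebank (or domain) embeddings can be sketched as follows; this is a minimal illustration of the idea, not the actual systems of Stymne et al. (2018) or Li et al. (2019), and all module names and dimensions are ours:

```python
# Minimal PyTorch sketch of the treebank-embedding idea: concatenated
# treebanks share one model, but each token also receives an embedding
# identifying its source treebank, so treebank-specific and general
# features can be kept apart. Dimensions and names are illustrative.
import torch
import torch.nn as nn

class TokenEncoder(nn.Module):
    def __init__(self, vocab_size: int, n_treebanks: int,
                 word_dim: int = 100, tb_dim: int = 12):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tb_emb = nn.Embedding(n_treebanks, tb_dim)  # one vector per treebank

    def forward(self, word_ids: torch.Tensor, tb_id: int) -> torch.Tensor:
        # word_ids: (seq_len,); a sentence comes from exactly one treebank.
        words = self.word_emb(word_ids)                        # (seq_len, word_dim)
        tb = self.tb_emb(torch.full_like(word_ids, tb_id))     # (seq_len, tb_dim)
        return torch.cat([words, tb], dim=-1)                  # (seq_len, word_dim + tb_dim)

encoder = TokenEncoder(vocab_size=10_000, n_treebanks=5)
print(encoder(torch.tensor([4, 17, 256]), tb_id=2).shape)  # torch.Size([3, 112])
```

At prediction time, the identifier of the target treebank (or genre) selects the corresponding vector, which is what lets the shared encoder separate treebank-specific from general features rather than blending them as direct concatenation does.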
Results were further improved upon with adversarial methods and BERT fine-tuning. Joshi et al. (2018) found that contextualized embeddings substantially reduce the difficulty of handling lexical gap issues between domains when the target and source are syntactically similar, and employed additional strategies to handle more syntactically dissimilar ones. Additional work by Fried et al. (2019) showed that while pre-trained LMs improved parser performance over several English domains, the improvements for out-of-domain results were not relatively larger. As Rehbein and Bildhauer (2017) note, parser adaptation requires both genre and domain adaptation, but content features, such as topics, do not generalize well for genre modeling, suggesting that different techniques are needed for cross-genre modeling.", "cite_spans": [ { "start": 136, "end": 152, "text": "Li et al. (2019)", "ref_id": "BIBREF24" }, { "start": 187, "end": 207, "text": "Stymne et al. (2018)", "ref_id": "BIBREF39" }, { "start": 498, "end": 517, "text": "Joshi et al. (2018)", "ref_id": "BIBREF17" }, { "start": 801, "end": 820, "text": "Fried et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "While the incorporation of pre-trained LMs has become standard in many NLP tasks, understanding how different models interact with different tasks is still an area of active research. Work by Martin et al. (2020) on French shows that a smaller French-specific LM derived from more diversified source data can compete with substantially larger models across a variety of downstream NLP tasks. Specifically, tasks which showed more divergence from Wikipedia benefited the most from a mixed-genre LM. The importance of source diversification is also seen in LMs for Finnish (Virtanen et al., 2019) and Chinese (Cui et al., 2020), each of which contains more text sources than simply Wikipedia. The impact of source diversification can also be seen in domain-specific LMs, such as FinBERT (Liu et al., 2020), which was derived from several types of financial sources, as different financial texts differ radically in style.", "cite_spans": [ { "start": 192, "end": 212, "text": "Martin et al. (2020)", "ref_id": "BIBREF26" }, { "start": 577, "end": 600, "text": "(Virtanen et al., 2019)", "ref_id": "BIBREF44" }, { "start": 613, "end": 631, "text": "(Cui et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "One additional benefit of explicitly looking at specific genres is that it can help further our understanding of the linguistic properties of different texts, forcing us to re-evaluate earlier annotation schemes predominantly designed for an original treebank genre (R\u00fanarsson and Sigursson, 2020).", "cite_spans": [ { "start": 257, "end": 288, "text": "(R\u00fanarsson and Sigursson, 2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We perform a systematic set of experiments for English and Swedish, using different neural constituency parsing architectures in combination with various BERT models to examine how this impacts cross-genre parsing. English is widely used in cross-domain and cross-genre research.
Swedish, however, is not as thoroughly examined, yet it possesses treebanks covering multiple genres as well as several BERT models, making it suitable for our research interests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We use two different neural span-based chart parsers, the Berkeley Neural Parser (Kitaev et al., 2019) and the SuPar Neural CRF Parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsers", "sec_num": "3.1" }, { "text": "Berkeley Neural Parser uses a self-attentive encoder and can incorporate BERT models to generate word representations. It uses the last-layer embedding of the last subtoken to represent the word. 2 It decouples predicting the optimal representation of a span (i.e. input sequence) from predicting the optimal label, requiring only that the resultant output form a valid tree. This not only removes the underlying grammars found in traditional PCFG parsers, but also direct correlations between a constituent and a label (Fried et al., 2019). A CKY-style (Kasami, 1965; Younger, 1967; Cocke and Schwartz, 1970) inference algorithm is used at test time. Additionally, the parser allows POS tag prediction to be used as an auxiliary loss task (we use BNP and BNPno to denote settings with and without the POS loss, respectively, in our experiments).", "cite_spans": [ { "start": 510, "end": 530, "text": "(Fried et al., 2019)", "ref_id": "BIBREF13" }, { "start": 539, "end": 553, "text": "(Kasami, 1965;", "ref_id": "BIBREF18" }, { "start": 554, "end": 568, "text": "Younger, 1967;", "ref_id": "BIBREF46" }, { "start": 569, "end": 594, "text": "Cocke and Schwartz, 1970)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers", "sec_num": "3.1" }, { "text": "SuPar Neural CRF Parser (SuPar) is a two-stage parser that, similarly to the Berkeley parser, first produces a constituent and then a label. It uses a scalar mix (Tenney et al., 2019a,b) of the last four layers for each subtoken of a word. Additionally, it uses a BiLSTM encoder to compute context-aware representations, employing two different MLP layers to indicate left and right word boundaries. Each candidate is scored over the two representations using a biaffine operation (Dozat and Manning, 2017), while the CKY algorithm is used when parsing to obtain the best tree.", "cite_spans": [ { "start": 156, "end": 179, "text": "Tenney et al., 2019a,b)", "ref_id": null }, { "start": 480, "end": 505, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers", "sec_num": "3.1" }, { "text": "We choose to experiment on two languages that contain treebanks representative of different genres, the English Web Treebank (Petrov and McDonald, 2012) and the Koala Eukalyptus Corpus (Adesam et al., 2015).", "cite_spans": [ { "start": 132, "end": 159, "text": "(Petrov and McDonald, 2012)", "ref_id": "BIBREF33" }, { "start": 192, "end": 213, "text": "(Adesam et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Treebanks", "sec_num": "3.2" }, { "text": "The English Web Treebank (EWT) was introduced in the 2012 shared task on Web Parsing and consists of five subareas: Yahoo answers, emails, Newsgroup texts, product reviews, and Weblog entries. The treebank follows an English Penn Treebank (Marcus et al., 1993) style annotation scheme with some additional POS tags to account for specific annotation needs, resulting in 50 POS tags and 28 phrase heads. We removed unary nodes, traces, and function labels during preprocessing.
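To make the two parsers' word-representation strategies from section 3.1 concrete (last layer of the last subtoken for the Berkeley parser; a scalar mix of the last four layers in SuPar), here is a minimal sketch assuming the HuggingFace transformers API. The mixing weights below are random stand-ins for what the parser would learn, and nothing here is the parsers' actual code:

```python
# Sketch of the two word-representation strategies described in section 3.1,
# using the HuggingFace transformers API rather than the parsers' own code.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

enc = tok("Genres interact with parsers", return_tensors="pt")
with torch.no_grad():
    layers = bert(**enc).hidden_states   # embeddings + 12 layers, each (1, seq, 768)

# (a) Berkeley-style: last layer, last subtoken of each word.
last_layer = layers[-1][0]               # (seq, 768)
word_ids = enc.word_ids(0)               # subtoken position -> word index (None = specials)
last_subtoken = {}                       # word index -> position of its final subtoken
for pos, w in enumerate(word_ids):
    if w is not None:
        last_subtoken[w] = pos
berkeley_reprs = torch.stack([last_layer[p] for p in last_subtoken.values()])

# (b) SuPar-style: softmax-weighted scalar mix of the last four layers,
# computed per subtoken; the weights would normally be learned jointly.
mix_weights = torch.softmax(torch.randn(4), dim=0)   # random stand-in
scalar_mix = sum(w * layer[0] for w, layer in zip(mix_weights, layers[-4:]))

print(berkeley_reprs.shape, scalar_mix.shape)  # (n_words, 768) and (seq, 768)
```

Per footnote 2, swapping the last subtoken for the first in step (a) made no difference for the Berkeley parser.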
", "cite_spans": [ { "start": 241, "end": 262, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Treebanks", "sec_num": "3.2" }, { "text": "The Swedish Eukalyptus Treebank (SET) consists of blog entries from the SIC corpus (\u00d6stling, 2013), parts of Swedish Europarl (Koehn, 2005), chapters from books, public information gathered from government and health information sites, and Wikipedia articles; it contains only 13 POS tags and 10 phrase heads. The treebank's annotation scheme is derived from the German TiGer Treebank (Brants et al., 2004). Notably, this includes discontinuous constituents, resulting in the need to uncross the branches of the extracted treebank. We follow the procedure used for TiGer, namely the transformation process proposed by Boyd (2007) using treetools, 3 and additionally remove all function labels.", "cite_spans": [ { "start": 123, "end": 136, "text": "(Koehn, 2005)", "ref_id": "BIBREF21" }, { "start": 388, "end": 409, "text": "(Brants et al., 2004)", "ref_id": "BIBREF3" }, { "start": 621, "end": 632, "text": "Boyd (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Treebanks", "sec_num": "3.2" }, { "text": "Data Splits The EWT is traditionally used as dev and test sets for examining the out-of-domain adaptability of models developed on the English PTB (Petrov and McDonald, 2012), and we are not aware of any standard splits for either the EWT or the SET. For this reason we chose to split each genre within the treebanks into approximately sequential 80/10/10 splits, with selected treebank statistics presented in Table 1. For cross-genre experiments, EWT and SET subgenres are concatenated respectively.", "cite_spans": [ { "start": 147, "end": 174, "text": "(Petrov and McDonald, 2012)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 429, "end": 436, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Treebanks", "sec_num": "3.2" }, { "text": "We use four different embeddings in our experiments: bert-base-multilingual-cased (mBERT) and bert-base-cased (BERTbc) (Devlin et al., 2019), bert-large-swedish-uncased, 4 and bert-base-swedish-cased (Malmsten et al., 2020). 5 bert-large-swedish-uncased (swBERT) was trained on Swedish Wikipedia (300M words).", "cite_spans": [ { "start": 107, "end": 128, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 188, "end": 210, "text": "(Malmsten et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "3.3" }, { "text": "bert-base-swedish-cased (kbBERT) was trained using newspapers (2,977M words), government publications (117M words), legally available e-deposits (62M words), 7 internet forums (31M words), and Swedish Wikipedia (29M words). 6", "cite_spans": [ { "start": 270, "end": 271, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "3.3" }, { "text": "4 https://github.com/af-ai-center/SweBERT 5 https://github.com/Kungbib/swedish-bert-models 6 We do not consider this to mean there are 16 distinct genres as we define the term, but rather note the more diversified domains, though author style would naturally influence any learned representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "3.3" }, { "text": "7 Including governmental releases, books, and magazines.",
"cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "3.3" }, { "text": "As noted in section 2, delexicalized comparisons of treebanks have been used to identify treebank similarity for source selection. An established delexicalized method is KL divergence (Kullback and Leibler, 1951) of POS trigrams (Rosa an\u010f Zabokrtsk\u00fd, 2015) . In Table 2 we present results for KL divergence for POS trigrams with the closest similar genre in bold. 8 Given that BERT works on a subtoken level, we additionally present the KL divergence for BERT subword tokens between genres. Each genre was tokenized using the specified BERT tokeinzer and counts were collected on subtokens. Identifying BERT subword tokens similarities provides insights into (sub)lexical level similarity, as well as how delexicalized and subword pattern with each other. 9 The row (y-axis) is the target genre, and the columns (x-axis) are the source (e.g. in Table 2 .4660 is Europarl target Blog source). 10 We see Figure 1 : Heat maps for EWT with mBERT on Dev Set Figure 2 : Heat maps for EWT with BERTbc on Dev Set that the patterns are rather similar, with simply the degree of divergence being larger between POS trigrams and sublexical tokens. The lone exception is that on the POS level, Wiki is a better source for Public while on the subtoken level, Europarl is. We can also see that a high subword dissimilarity does not necessarily predict a proportionally high POS dissimilarity. Fig. 1 shows heat maps representing transfer Fscores on the dev sets for different target and source genres. Note that the diagonals are NA values, not the minimum per axis given that the diagonal represents when the target and source are the same genre. We also present a setting All 11 in which all the genres are combined in the train and indicate the absolute F-score increase over the baseline, as well as Gap which indicates the absolute increase the All setting shows compared to the best source experiment.", "cite_spans": [ { "start": 184, "end": 212, "text": "(Kullback and Leibler, 1951)", "ref_id": "BIBREF22" }, { "start": 229, "end": 256, "text": "(Rosa an\u010f Zabokrtsk\u00fd, 2015)", "ref_id": null }, { "start": 364, "end": 365, "text": "8", "ref_id": null }, { "start": 756, "end": 757, "text": "9", "ref_id": null }, { "start": 892, "end": 894, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 845, "end": 852, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 902, "end": 910, "text": "Figure 1", "ref_id": null }, { "start": 953, "end": 961, "text": "Figure 2", "ref_id": null }, { "start": 1379, "end": 1385, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Delexicalized and Subtoken Divergence", "sec_num": "4" }, { "text": "We see that EWT using mBert does not correlate well with the KL divergences in Table 2 . There is seemingly a preference for either Answers or Reviews as the best source genre across experiments. Furthermore, no single parsing architecture can claim to be superior, as the best individual settings are quite varied across the parsers. The All setting results in the best over all performance, a trend that will continue through all results, but this is unsurprising given it has more training data across all cross-genre experiments. 
The individual Gap values show a large range of improvements, but also a lack of consistency in how much the All setting improves over the best source for each genre.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "EWT mBERT", "sec_num": "5.1" }, { "text": "We see noticeable improvements for all experiments when using an English-specific BERT model (see Fig. 2), which is expected. However, improvements for individual settings vary greatly. In some cases, the improvements are greater than 2% absolute, while in others they are as small as .05%.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 104, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "EWT BERTbc", "sec_num": "5.2" }, { "text": "We also see a more noticeable trend of SuPar performing slightly better than BNP and BNPno in many experiments overall, particularly in the All setting, and it shows consistently higher Gap increases. However, we see continued individual architectural strengths and consistency, such as BNPno still showing strength on parsing Weblog, similar to that in Fig. 1, and BNP actually showing the same best source genres as with mBERT.", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 358, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "EWT BERTbc", "sec_num": "5.2" }, { "text": "We do, however, see more variation in source preferences for the other two architectures. For BNPno, Answers is no longer dominant and instead we see a great deal of variation, while for SuPar, we see a shift towards Weblog. This is particularly interesting given that Weblog is often the most dissimilar source with regard to KL divergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EWT BERTbc", "sec_num": "5.2" }, { "text": "Another interesting observation is that for both BNPno and BNP, Reviews is clearly not benefiting in the All setting as the other genres are. In fact, for BNP, we see it is actually worse than the best source experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EWT BERTbc", "sec_num": "5.2" }, { "text": "In Fig. 3 we see results using multilingual BERT on Swedish. 12 An initial observation is that SuPar performs, overall, better than both BNPno and BNP, particularly in the All setting, though there may be individual settings in which a Berkeley parser setup performs better.", "cite_spans": [ { "start": 61, "end": 63, "text": "12", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 9, "text": "Fig. 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "SET mBERT", "sec_num": "5.3" }, { "text": "In terms of individual source experiments, we see a great deal of variation both intra- and inter-parser. While Novels is the best source for Public across three experimental settings, for all other genres, at least one of the best sources is different for that specific genre across the parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SET mBERT", "sec_num": "5.3" }, { "text": "For BNPno we see that Wiki is the best source for Blog, even though it is furthest in subtoken similarity, and the second furthest in POS similarity.
Yet when using Wiki as a source, BNPno outperforms its BNP counterpart in every single experiment, often substantially, except in the All setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SET mBERT", "sec_num": "5.3" }, { "text": "Europarl is seemingly a case where POS and subtoken divergences align with parser architectures with regard to lexicalization. For BNP, Novels is preferred, which is just behind Public in POS divergence but substantially behind in subtoken divergence. However, both BNPno and SuPar prefer Public, which is by far the closest at the subtoken level, and actually perform relatively poorly using the other genres as sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SET mBERT", "sec_num": "5.3" }, { "text": "Generally, we see a decrease in performance for swBERT (Fig. 4) compared to mBERT. However, the drop is perhaps not as significant in many cases as expected, especially given the size difference of the LMs. Recalling also that swBERT is uncased, and that cased models often work better, we are unsure how much this contributes to the performance degradation. However, there are still several settings in which swBERT outperforms mBERT. BNP also shows more volatility compared to BNPno, but we still see the trend that it performs better in the All setting. One interesting observation is the lack of variation in results across the BNPno experiments regardless of which source is used for the Wiki target. Additionally, in three experiments Wiki is the best source for BNP, yet none of these sources were the best using mBERT. However, these three experiments substantially outperform their BNPno counterparts. The behavior of Blog is not intuitive, as it actually now benefits the most from one of the least similar sources, Public.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Fig. 4)", "ref_id": null } ], "eq_spans": [], "section": "SET swBERT", "sec_num": "5.4" }, { "text": "SuPar stays relatively consistent, with the only change being that the best source for Novels switched from Public to Blog. However, this is actually a change to the most similar source genre, something that mBERT dispreferred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SET swBERT", "sec_num": "5.4" }, { "text": "All results for kbBERT (Fig. 5) are substantially better than both mBERT and swBERT. SuPar again shows less volatility, with Public returning as the best source for Novels, and now Wiki being the best source for Europarl.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 31, "text": "Fig. 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "SET kbBERT", "sec_num": "5.5" }, { "text": "Wiki is the best source for Blog, but we must note that for both BNPno and BNP, it is barely better than Novels, while for SuPar it is substantially better. For BNPno, we see the best performing sources are slightly different than with mBERT, as now Europarl prefers Novels and Novels prefers Public instead of Blog. For BNP, Wiki benefits the most from Blog, even though it is the most dissimilar with regard to POS divergence.
Another important observation, however, is that the slight performance advantage SuPar had using mBERT and swBERT over the Berkeley parsers has been somewhat reduced, and in many settings a Berkeley parser outperforms SuPar again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SET kbBERT", "sec_num": "5.5" }, { "text": "The different parsing architectures interact differently with the underlying latent properties of the embeddings in their parsing decisions. SuPar, however, does seem to show the most consistently stable performance across all experiments, and in a majority of cases, is the best performing model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Whether POS information is needed in neural constituency parsing is seemingly a complicated picture in terms of performance, though it has been shown to benefit certain neural dependency parsing architectures. However, we can see the impact the inclusion of the POS loss has in terms of parser source preferences, as seldom were the behaviors of BNPno and BNP similar. This is to be expected, as the source of the underlying LM may have implicitly different POS distributions than either the target or source genre, and a POS loss is most likely sensitive to these differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "BNP showed much more stable source preferences across genres and experiments, indicating how the POS task is seemingly able to mitigate, to some degree, the influence of the LM, though whether this is positive or negative is unclear. This may indicate that embeddings derived from more mono-genre texts interact in a more consistent way when using POS information, stabilizing source preferences. This is seen in both the English and Swedish experiments to a degree. However, once the LM has more genre representations, this stabilizing factor no longer holds, as the inherent POS distributions are most likely far more varied, as seen with kbBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We can also see how other architectural choices besides the inclusion of POS information are important, as otherwise we would expect BNPno and SuPar to behave similarly, which they do not. A clear distinction is how the two parsers incorporate BERT embeddings. The choices of scalar mixing (de Vries et al., 2020), embedding averages (He and Choi, 2020), and different subtoken selection (Hettiarachchi and Ranasinghe, 2020) have all been shown to impact performance on NLP tasks. Another factor may be the additional word boundary MLP layers in SuPar's architecture, providing more context for an individual parsing decision and making it more robust to slight variations in syntactic distributions.", "cite_spans": [ { "start": 289, "end": 311, "text": "de Vries et al., 2020)", "ref_id": "BIBREF45" }, { "start": 333, "end": 352, "text": "(He and Choi, 2020)", "ref_id": "BIBREF15" }, { "start": 388, "end": 424, "text": "(Hettiarachchi and Ranasinghe, 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The influence of the LM's genre is perhaps most visible in the Swedish Wikipedia genre experiments with swBERT and BNP shown in Fig. 4. All the sources produce similar results when Wikipedia is the target.
It may simply be that when the target genre is too similar to the genre of the LM, the impact of similarly sized source genres is minimized, as there now exists too much latent and explicit Wikipedia data. However, in the All setting, we see a substantial increase, where now there is not only more data, but more diversity to counterbalance the Wikipedia-derived LM. Additionally, we see that Wikipedia is the preferred source for all genres outside of Blog for BNP, and results are substantially better than their BNPno counterparts. However, this does not hold across parsers, given that SuPar shows completely different behaviors for swBERT than the Berkeley experiments. This further emphasizes the difficulty of transferring knowledge of one parser's source preference behaviors to another.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 130, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For both languages there can be substantial deviation of the best performing source genre from the closest source genre on both the delexicalized and the subtoken level. The overall gains specific sources show for an individual target genre can also be incredibly inconsistent across experiments. Why this is the case is further complicated by the source genre's interaction with the LM. An English-only BERT model yielded some improvements, while a Swedish-only model showed varying results depending upon the LM. This can be due to several factors. The most obvious is the size of the LM, as swBERT is substantially smaller, yet it still yields results close to the much larger mBERT for Swedish. However, kbBERT is approximately the same size as BERTbc, and produces much larger absolute gains than BERTbc did for English, indicating that size is not the only factor. The difficulty in identifying the reasons stems from many interacting aspects, such as higher baselines for English, treebank sizes, and annotation scheme complexity. However, kbBERT was derived from a far more diverse set of genres, many of which overlap with the SET, compared to BERTbc, which was derived from mostly two distinct genres, neither of which overlaps with the EWT substantially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Importantly, we also see the impact an LM has on closing performance gaps between parsing architectures, as kbBERT results for the Berkeley parsers are overall more on par with SuPar. This demonstrates how the interaction between three distinct genres makes optimal source selection with LMs and different parsers far more difficult than with established delexicalized approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We have performed a set of detailed experiments that explored the interaction between genres, parsers, and BERT models. We have shown that the LM plays the pivotal role in successful genre-sensitive parsing within our chosen parsing architectures. In addition, we have also shown that different architectures often behave dissimilarly, making the determination of the best source for a specific target reliant on better understanding the underlying architectures rather than on directly transferring the behavior of one parser to another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Treebanks are rather static, particularly constituency treebanks.
While we have often seen incremental performance gains with every new parser, how successful we are at cross-genre parsing will, for the time being, be more related to our exploitation of various other sources and methods. LMs, for example, can be trained on vast amounts of unannotated data, allowing the LM to become far more sensitive to genre differences than any small treebank, especially as we have control over the creation of an LM, and less so over that of a treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Perhaps the most practical way to currently create genre-sensitive parsing models is to better mix distinct genres within the data used to derive the LM. The LM itself does not even have to be overtly large; rather, even small mixtures of other genres and domains provide noticeable benefits (Martin et al., 2020). Future research will look to create multilingual cross-genre models that work across treebanks and genres.", "cite_spans": [ { "start": 292, "end": 313, "text": "(Martin et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In the related work section we use the terms as used in the original papers and not our definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The authors note they found no difference between using the last and first subtoken.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/wmaier/treetools", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We follow Rosa and \u017dabokrtsk\u00fd (2015) and default the target genre to 1 in KL calculations. 9 We note, however, that the two parsers do not necessarily use all the subtokens when generating embeddings. 10 All future heat maps and tables have the same set-up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Tables containing full results are found in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Tables containing full results are found in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank Yvonne Adesam and Gerlof Bouma for providing the SET, Sandra K\u00fcbler and members of the Uppsala NLP Parsing Group: Joakim Nivre, Sara Stymne, and Artur Kulmizev for their feedback, and the anonymous reviewers for their comments. The author is supported by the Swedish strategic research programme eSSENCE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Defining the eukalyptus forest -the koala treebank of Swedish", "authors": [ { "first": "Yvonne", "middle": [], "last": "Adesam", "suffix": "" }, { "first": "Gerlof", "middle": [], "last": "Bouma", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015)", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvonne Adesam, Gerlof Bouma, and Richard Johansson. 2015. Defining the eukalyptus forest -the koala treebank of Swedish.
In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 1-9, Vilnius, Lithuania. Link\u00f6ping University Electronic Press, Sweden.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Register, Genre, and Style. Cambridge Textbooks in Linguistics", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Conrad", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1017/CBO9780511814358" ] }, "num": null, "urls": [], "raw_text": "Douglas Biber and Susan Conrad. 2009. Register, Genre, and Style. Cambridge Textbooks in Linguistics. Cambridge University Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discontinuity Revisited: An Improved Conversion to Context-free Representations", "authors": [ { "first": "Adriane", "middle": [], "last": "Boyd", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Linguistic Annotation Workshop, LAW '07", "volume": "", "issue": "", "pages": "41--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriane Boyd. 2007. Discontinuity Revisited: An Improved Conversion to Context-free Representations. In Proceedings of the Linguistic Annotation Workshop, LAW '07, pages 41-44, Prague, Czech Republic.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TIGER: Linguistic Interpretation of a German Corpus", "authors": [ { "first": "Sabine", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Dipper", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Eisenberg", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Hansen", "suffix": "" }, { "first": "Esther", "middle": [], "last": "K\u00f6nig", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Lezius", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Rohrer", "suffix": "" }, { "first": "George", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2004, "venue": "Journal of Language and Computation", "volume": "", "issue": "2", "pages": "597--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther K\u00f6nig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic Interpretation of a German Corpus. Journal of Language and Computation, 2004(2):597-620.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A word clustering approach to domain adaptation: Effective parsing of biomedical texts", "authors": [ { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Enrique", "middle": [ "Henestroza" ], "last": "Anguiano", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 12th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "37--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Candito, Enrique Henestroza Anguiano, and Djam\u00e9 Seddah. 2011. A word clustering approach to domain adaptation: Effective parsing of biomedical texts.
In Proceedings of the 12th International Conference on Parsing Technologies, pages 37-42, Dublin, Ireland.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Le corpus sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical", "authors": [ { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Joint Conference JEP-TALN-RECITAL 2012", "volume": "", "issue": "", "pages": "321--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Candito and Djam\u00e9 Seddah. 2012. Le corpus sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical. In Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, pages 321-334, Grenoble, France.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Programming Languages and Their Compilers", "authors": [ { "first": "John", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Cocke and Jacob Schwartz. 1970. Programming Languages and Their Compilers. Technical report, Courant Institute of Mathematical Sciences, New York.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Revisiting pre-trained models for Chinese natural language processing", "authors": [ { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "657--668", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 657-668, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Frustratingly easy domain adaptation", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "III" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adaptation.
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263, Prague, Czech Republic.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations (ICLR 2017), Toulon, France. Conference Track Proceedings.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Lexicalized vs. delexicalized parsing in low-resource scenarios", "authors": [ { "first": "Agnieszka", "middle": [], "last": "Falenska", "suffix": "" }, { "first": "\u00d6zlem", "middle": [], "last": "\u00c7etinoglu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agnieszka Falenska and \u00d6zlem \u00c7etinoglu. 2017. Lexicalized vs. delexicalized parsing in low-resource scenarios. In Proceedings of the 15th International Conference on Parsing Technologies, pages 18-24, Pisa, Italy.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An exploratory study on genre classification using readability features", "authors": [ { "first": "Johan", "middle": [], "last": "Falkenjack", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Santini", "suffix": "" }, { "first": "Arne", "middle": [], "last": "J\u00f6nsson", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Sixth Swedish Language Technology Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Falkenjack, Marina Santini, and Arne J\u00f6nsson. 2016. An exploratory study on genre classification using readability features.
In Proceedings of the Sixth Swedish Language Technology Conference (SLTC 2016), Ume\u00e5, Sweden.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cross-domain generalization of neural constituency parsers", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "323--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 323-330, Florence, Italy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus Variation and Parser Performance", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "167--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea. 2001. Corpus Variation and Parser Performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 167-202, Pittsburgh, PA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Establishing strong baselines for the new decade: Sequence tagging, syntactic and semantic parsing with BERT", "authors": [ { "first": "Han", "middle": [], "last": "He", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han He and Jinho D. Choi. 2020. Establishing strong baselines for the new decade: Sequence tagging, syntactic and semantic parsing with BERT.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BRUMS at SemEval-2020 task 3: Contextualised embeddings for predicting the (graded) effect of context in word similarity", "authors": [ { "first": "Hansi", "middle": [], "last": "Hettiarachchi", "suffix": "" }, { "first": "Tharindu", "middle": [], "last": "Ranasinghe", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "142--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2020. BRUMS at SemEval-2020 task 3: Contextualised embeddings for predicting the (graded) effect of context in word similarity. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 142-149, Barcelona (online).
International Committee for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Extending a parser to distant domains using a few dozen partially annotated examples", "authors": [ { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1190--1199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1190-1199, Melbourne, Australia.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Languages", "authors": [ { "first": "Tadao", "middle": [], "last": "Kasami", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tadao Kasami. 1965. An Efficient Recognition and Syntax-Analysis Algorithm for Context-Free Languages. Technical report, AFCRL-65-75, Air Force Cambridge Research Laboratory.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Frustratingly easy neural domain adaptation", "authors": [ { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Stratos", "suffix": "" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "387--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 387-396, Osaka, Japan.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multilingual constituency parsing with self-attention and pre-training", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3499--3505", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499-3505, Florence, Italy.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation.
In Machine Translation Summit, pages 79-86, Phuket, Thailand.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "On information and sufficiency", "authors": [ { "first": "Solomon", "middle": [], "last": "Kullback", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Leibler", "suffix": "" } ], "year": 1951, "venue": "The Annals of Mathematical Statistics", "volume": "22", "issue": "1", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Solomon Kullback and Richard Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semi-supervised domain adaptation for dependency parsing via improved contextualized word representations", "authors": [ { "first": "Ying", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3806--3817", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Li, Zhenghua Li, and Min Zhang. 2020. Semi-supervised domain adaptation for dependency parsing via improved contextualized word representations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3806-3817, Barcelona, Spain (Online).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Semi-supervised domain adaptation for dependency parsing", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2386--2395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, and Luo Si. 2019. Semi-supervised domain adaptation for dependency parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2386-2395, Florence, Italy.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "FinBERT: A pre-trained financial language representation model for financial text mining", "authors": [ { "first": "Zhuang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Degen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kaiyu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhuang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20", "volume": "", "issue": "", "pages": "4513--4519", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. 2020. FinBERT: A pre-trained financial language representation model for financial text mining. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4513-4519. International Joint Conferences on Artificial Intelligence Organization.
Special Track on AI in FinTech.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Playing with words at the National Library of Sweden - making a Swedish BERT", "authors": [ { "first": "Martin", "middle": [], "last": "Malmsten", "suffix": "" }, { "first": "Love", "middle": [], "last": "B\u00f6rjeson", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Haffenden", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Malmsten, Love B\u00f6rjeson, and Chris Haffenden. 2020. Playing with words at the National Library of Sweden - making a Swedish BERT.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "CamemBERT: a tasty French language model", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Yoann", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "De La Clergerie", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7203--7219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multi-source transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "62--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers.
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62-72, Edinburgh, Scotland, UK.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Stagger: an open-source part of speech tagger for Swedish", "authors": [ { "first": "Robert", "middle": [], "last": "\u00d6stling", "suffix": "" } ], "year": 2013, "venue": "Northern European Journal of Language Technology", "volume": "3", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert \u00d6stling. 2013. Stagger: an open-source part of speech tagger for Swedish. Northern European Journal of Language Technology, 3:1-18.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improved Inference for Unlexicalized Parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 404-411, Rochester, NY.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Overview of the 2012 shared task on Parsing the Web", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2012, "venue": "Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on Parsing the Web.
In Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), Sapporo, Japan.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Effective measures of domain similarity for parsing", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Gertjan", "middle": [], "last": "Van Noord", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1566--1576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1566-1576, Portland, Oregon, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural unsupervised domain adaptation in NLP-A survey", "authors": [ { "first": "Alan", "middle": [], "last": "Ramponi", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6838--6855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ramponi and Barbara Plank. 2020. Neural unsupervised domain adaptation in NLP-A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Data point selection for genre-aware parsing", "authors": [ { "first": "Ines", "middle": [], "last": "Rehbein", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Bildhauer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories", "volume": "", "issue": "", "pages": "95--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ines Rehbein and Felix Bildhauer. 2017. Data point selection for genre-aware parsing. In Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 95-105, Prague, Czech Republic.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "KLcpos3 - a language similarity measure for delexicalized parser transfer", "authors": [ { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "243--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolf Rosa and Zden\u011bk \u017dabokrtsk\u00fd. 2015. KLcpos3 - a language similarity measure for delexicalized parser transfer.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 243-249, Beijing, China.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Parsing Icelandic Al\u00feingi transcripts: Parliamentary speeches as a genre", "authors": [ { "first": "Kristj\u00e1n", "middle": [], "last": "R\u00fanarsson", "suffix": "" }, { "first": "Einar Freyr", "middle": [], "last": "Sigur\u00f0sson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second ParlaCLARIN Workshop", "volume": "", "issue": "", "pages": "44--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristj\u00e1n R\u00fanarsson and Einar Freyr Sigur\u00f0sson. 2020. Parsing Icelandic Al\u00feingi transcripts: Parliamentary speeches as a genre. In Proceedings of the Second ParlaCLARIN Workshop, pages 44-50, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Parser training with heterogeneous treebanks", "authors": [ { "first": "Sara", "middle": [], "last": "Stymne", "suffix": "" }, { "first": "Miryam", "middle": [], "last": "De Lhoneux", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "619--625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Stymne, Miryam de Lhoneux, Aaron Smith, and Joakim Nivre. 2018. Parser training with heterogeneous treebanks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 619-625, Melbourne, Australia.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z)", "authors": [ { "first": "Heike", "middle": [], "last": "Telljohann", "suffix": "" }, { "first": "Erhard", "middle": [], "last": "Hinrichs", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Zinsmeister", "suffix": "" }, { "first": "Kathrin", "middle": [], "last": "Beck", "suffix": "" } ], "year": 2015, "venue": "Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heike Telljohann, Erhard Hinrichs, Sandra K\u00fcbler, Heike Zinsmeister, and Kathrin Beck. 2015. Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z). Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen, Germany.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "What do you learn from context? Probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [ "R" ], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, Thomas R. McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "What do character-level models learn about morphology? The case of dependency parsing", "authors": [ { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Grivas", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2573--2583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clara Vania, Andreas Grivas, and Adam Lopez. 2018. What do character-level models learn about morphology? The case of dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2573-2583, Brussels, Belgium.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Multilingual is not enough: BERT for Finnish", "authors": [ { "first": "Antti", "middle": [], "last": "Virtanen", "suffix": "" }, { "first": "Jenna", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Ilo", "suffix": "" }, { "first": "Jouni", "middle": [], "last": "Luoma", "suffix": "" }, { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Salakoski", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models", "authors": [ { "first": "Wietse", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Van Cranenburgh", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4339--4350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wietse de Vries, Andreas van Cranenburgh, and Malvina Nissim. 2020. What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4339-4350, Online. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Recognition and parsing of context-free languages in n^3", "authors": [ { "first": "Daniel", "middle": [], "last": "Younger", "suffix": "" } ], "year": 1967, "venue": "Information and Control", "volume": "10", "issue": "2", "pages": "189--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Younger. 1967. Recognition and parsing of context-free languages in n^3. Information and Control, 10(2):189-208.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Fast and accurate neural CRF constituency parsing", "authors": [ { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houquan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20", "volume": "", "issue": "", "pages": "4046--4053", "other_ids": { "DOI": [ "10.24963/ijcai.2020/560" ] }, "num": null, "urls": [], "raw_text": "Yu Zhang, Houquan Zhou, and Zhenghua Li. 2020. Fast and accurate neural CRF constituency parsing. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4046-4053.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Is POS tagging necessary or even helpful for neural dependency parsing?", "authors": [ { "first": "Houquan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Natural Language Processing and Chinese Computing", "volume": "", "issue": "", "pages": "179--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Houquan Zhou, Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Is POS tagging necessary or even helpful for neural dependency parsing? In Natural Language Processing and Chinese Computing, pages 179-191, Cham.
Springer International Publishing.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV), pages 19-27.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Heat maps for SET with mBERT on Dev Set. Figure 4: Heat maps for SET with swBERT on Dev Set", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Heat maps for SET with kbBERT on Dev Set", "uris": null }, "TABREF2": { "html": null, "num": null, "text": "Treebank statistics for EWT and SET genres with number of train sentences along with total tokens, token type ratios, and unique POS trigram ratios for training sets, as well as number of dev sentences", "type_str": "table", "content": "
Treebank | Genre | Answers | Email | Newsgroup | Reviews | Weblog | Answers | Email | Newsgroup | Reviews | Weblog
EWT | Answers | 0 | .2679 | .2622 | .2063 | .3180 | 0 | .3900 | .3539 | .2987 | .4416
EWT | Email | .3689 | 0 | .3099 | .4755 | .4721 | .4820 | 0 | .4044 | .6236 | .6194
EWT | Newsgroup | .4157 | .3845 | 0 | .4881 | .2539 | .4038 | .4215 | 0 | .4851 | .3213
EWT | Reviews | .2279 | .3870 | .3613 | 0 | .4082 | .2978 | .4926 | .4390 | 0 | .5465
EWT | Weblog | .4125 | .4523 | .1945 | .4738 | 0 | .4860 | .5767 | .3201 | .5475 | 0
Treebank | Genre | Blog | Europarl | Public | Novels | Wiki | Blog | Europarl | Public | Novels | Wiki
SET | Blog | 0 | .4108 | .5060 | .2374 | .4633 | 0 | .4942 | .4942 | .4034 | .5134
SET | Europarl | .4660 | 0 | .2074 | .2165 | .2526 | .5109 | 0 | .3983 | .4965 | .5572
SET | Public | .4816 | .2443 | 0 | .2665 | .1494 | .4413 | .3930 | 0 | .4217 | .3983
SET | Novels | .1991 | .2388 | .2705 | 0 | .2596 | .3783 | .5003 | .4469 | 0 | .4342
SET | Wiki | .4197 | .3382 | .1622 | .2720 | 0 | .4257 | .5004 | .4179 | .4637 | 0
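The matrix above is asymmetric (compare the Answers-to-Email value in the first EWT row, .2679, with the Email-to-Answers value in the second, .3689) and has zeros on the diagonal, which is the signature of KL-style divergences between genre-level distributions, as in Kullback and Leibler (1951) or the POS-trigram-based KLcpos3 measure of Rosa and \u017dabokrtsk\u00fd (2015), both cited above; the two five-column blocks appear to correspond to two different measures, though the header does not name them. A minimal sketch of a divergence of that kind, with hypothetical function names and plain add-one smoothing as assumptions (an illustration, not the implementation behind these numbers):

```
import math
from collections import Counter

def pos_trigram_counts(sentences):
    """Count POS trigrams over sentences given as lists of POS tags."""
    counts = Counter()
    for tags in sentences:
        for i in range(len(tags) - 2):
            counts[tuple(tags[i:i + 3])] += 1
    return counts

def kl_divergence(target_counts, source_counts):
    """D_KL(target || source): asymmetric, and near zero when the two
    distributions match. Add-one smoothing keeps the source probability
    positive for trigrams the source never saw."""
    vocab = set(target_counts) | set(source_counts)
    t_total = sum(target_counts.values())
    s_total = sum(source_counts.values()) + len(vocab)
    kl = 0.0
    for trigram, count in target_counts.items():
        p = count / t_total
        q = (source_counts.get(trigram, 0) + 1) / s_total
        kl += p * math.log(p / q)
    return kl
```

Because the measure is directional, kl_divergence(a, b) and kl_divergence(b, a) generally differ, matching the asymmetry of the matrix.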
" }, "TABREF3": { "html": null, "num": null, "text": "", "type_str": "table", "content": "" }, "TABREF5": { "html": null, "num": null, "text": "A Full English Results for Parsers and BERT Models", "type_str": "table", "content": "
Parser | Genre | Baseline | Answers | Email | Newsgroup | Reviews | Weblog | All | Gap
BNPno | Answers | 88.72 | NA | 89.51 | 89.38 | 89.57 | 89.67 | 90.66 (+1.94) | +.99
BNPno | Email | 89.29 | 90.62 | NA | 89.92 | 90.03 | 89.93 | 91.16 (+1.87) | +.54
BNPno | Newsgroup | 87.61 | 88.67 | 88.34 | NA | 86.90 | 87.82 | 89.05 (+1.44) | +.38
BNPno | Reviews | 88.68 | 89.52 | 89.22 | 89.46 | NA | 89.15 | 90.52 (+1.84) | +1.00
BNPno | Weblog | 91.13 | 92.75 | 92.83 | 92.55 | 92.61 | NA | 93.75 (+2.62) | +.92
BNP | Answers | 89.01 | NA | 89.63 | 89.43 | 89.81 | 89.37 | 90.66 (+1.65) | +.85
BNP | Email | 89.03 | 89.86 | NA | 89.94 | 90.44 | 89.35 | 91.21 (+2.18) | +.77
BNP | Newsgroup | 86.50 | 87.46 | 86.75 | NA | 86.75 | 87.54 | 88.20 (+1.70) | +.74
BNP | Reviews | 88.94 | 89.84 | 89.65 | 89.62 | NA | 89.60 | 90.20 (+1.26) | +.36
BNP | Weblog | 90.66 | 92.22 | 92.18 | 92.16 | 92.47 | NA | 93.53 (+2.87) | +1.06
SuPar | Answers | 88.10 | NA | 89.16 | 89.23 | 89.69 | 89.44 | 90.54 (+2.44) | +.85
SuPar | Email | 89.06 | 90.62 | NA | 90.27 | 89.98 | 90.09 | 91.34 (+2.28) | +.72
SuPar | Newsgroup | 86.68 | 88.13 | 88.00 | NA | 87.47 | 87.78 | 89.42 (+2.74) | +1.29
SuPar | Reviews | 88.06 | 89.35 | 89.19 | 89.08 | NA | 89.57 | 90.39 (+2.33) | +.82
SuPar | Weblog | 90.58 | 91.95 | 91.94 | 92.41 | 92.05 | NA | 93.56 (+2.98) | +1.15
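For reading these tables: the parenthesized value after the All score is its improvement over the Baseline column, and Gap is the All score minus the best single-source score in that row (so a negative Gap, as in a few Swedish rows below, means training on all sources did not beat the best single source). This reading is inferred from the numbers themselves rather than stated in this appendix, and it checks out on nearly every row; a quick sanity check in Python:

```
# Sanity check of the two delta columns, using the BNPno/Answers row above.
# The column semantics (All minus Baseline, All minus best single source)
# are inferred from the numbers, not stated explicitly in the source.
def deltas(baseline, single_source_scores, all_score):
    """Return (All minus Baseline, All minus the best single source)."""
    best_single = max(single_source_scores)
    return round(all_score - baseline, 2), round(all_score - best_single, 2)

# Baseline 88.72; Email 89.51, Newsgroup 89.38, Reviews 89.57,
# Weblog 89.67 (Answers itself is NA); All 90.66.
print(deltas(88.72, [89.51, 89.38, 89.57, 89.67], 90.66))
# -> (1.94, 0.99), matching "(+1.94)" and "+.99" in the first data row.
```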
" }, "TABREF6": { "html": null, "num": null, "text": "Full EWT Results with mBERT on Dev Set", "type_str": "table", "content": "
Parser | Genre | Baseline | Answers | Email | Newsgroup | Reviews | Weblog | All | Gap
BNPno | Answers | 89.68 | NA | 90.45 | 90.51 | 90.58 | 90.41 | 91.48 (+1.80) | +.90
BNPno | Email | 88.75 | 90.67 | NA | 90.13 | 90.31 | 90.56 | 91.35 (+2.60) | +.68
BNPno | Newsgroup | 87.96 | 89.07 | 88.25 | NA | 88.95 | 89.51 | 90.29 (+2.33) | +.78
BNPno | Reviews | 89.70 | 90.53 | 90.65 | 90.34 | NA | 90.06 | 90.76 (+1.06) | +.11
BNPno | Weblog | 91.48 | 93.14 | 92.87 | 93.28 | 93.39 | NA | 95.01 (+3.53) | +1.62
BNP | Answers | 89.60 | NA | 90.87 | 90.45 | 90.93 | 90.29 | 91.63 (+2.03) | +.70
BNP | Email | 89.28 | 90.10 | NA | 90.41 | 90.58 | 89.93 | 91.58 (+2.30) | +1.00
BNP | Newsgroup | 87.74 | 89.01 | 88.47 | NA | 88.69 | 88.73 | 89.85 (+2.11) | +.84
BNP | Reviews | 89.88 | 90.83 | 90.27 | 90.25 | NA | 90.02 | 90.69 (+.81) | -.14
BNP | Weblog | 91.28 | 93.23 | 93.17 | 92.69 | 93.32 | NA | 94.29 (+3.01) | +.97
SuPar | Answers | 89.95 | NA | 90.98 | 90.90 | 90.93 | 91.10 | 92.19 (+2.24) | +1.09
SuPar | Email | 90.14 | 91.38 | NA | 91.20 | 91.06 | 91.23 | 92.51 (+2.37) | +1.13
SuPar | Newsgroup | 88.57 | 88.90 | 88.97 | NA | 89.55 | 89.89 | 90.91 (+2.34) | +1.02
SuPar | Reviews | 89.28 | 90.17 | 90.39 | 89.76 | NA | 90.69 | 91.73 (+2.45) | +1.04
SuPar | Weblog | 91.82 | 92.79 | 92.87 | 93.17 | 92.67 | NA | 94.31 (+2.49) | +1.14
" }, "TABREF7": { "html": null, "num": null, "text": "Full EWT Results with BERTbc on Dev Set B Full Swedish Results for Parsers and BERT Models", "type_str": "table", "content": "
Parser | Genre | Baseline | Blog | Europarl | Public | Novels | Wiki | All | Gap
BNPno | Blog | 75.99 | NA | 76.65 | 77.09 | 78.01 | 78.14 | 79.22 (+3.23) | +1.08
BNPno | Europarl | 80.91 | 82.60 | NA | 82.97 | 82.44 | 81.98 | 83.97 (+3.06) | +1.00
BNPno | Public | 83.11 | 84.55 | 83.33 | NA | 85.81 | 84.34 | 85.34 (+2.23) | -.47
BNPno | Novels | 85.14 | 88.04 | 87.03 | 87.67 | NA | 87.33 | 89.75 (+4.61) | +1.71
BNPno | Wiki | 82.10 | 83.75 | 84.35 | 84.74 | 84.89 | NA | 84.68 (+2.58) | -.21
BNP | Blog | 75.87 | NA | 77.45 | 77.86 | 78.80 | 78.18 | 79.57 (+3.70) | +.77
BNP | Europarl | 80.71 | 82.22 | NA | 83.20 | 83.55 | 83.06 | 84.41 (+3.70) | +.86
BNP | Public | 83.25 | 83.99 | 82.94 | NA | 86.51 | 84.77 | 86.72 (+3.47) | +.21
BNP | Novels | 84.94 | 88.91 | 86.99 | 86.81 | NA | 87.38 | 89.59 (+4.65) | +.68
BNP | Wiki | 81.84 | 83.56 | 83.42 | 83.88 | 83.56 | NA | 85.03 (+3.19) | +1.15
SuPar | Blog | 74.40 | NA | 77.55 | 76.90 | 77.77 | 79.18 | 80.13 (+5.73) | +.95
SuPar | Europarl | 82.52 | 82.60 | NA | 84.36 | 83.03 | 82.81 | 85.01 (+2.49) | +.65
SuPar | Public | 84.95 | 85.98 | 85.06 | NA | 86.88 | 86.43 | 87.99 (+3.04) | +1.11
SuPar | Novels | 86.54 | 88.58 | 88.67 | 89.30 | NA | 88.12 | 90.80 (+4.51) | +1.50
SuPar | Wiki | 83.13 | 84.61 | 83.72 | 85.57 | 84.48 | NA | 86.12 (+2.99) | +.55
" }, "TABREF8": { "html": null, "num": null, "text": "Full SET Results with mBERT on Dev Set", "type_str": "table", "content": "
Parser | Genre | Baseline | Blog | Europarl | Public | Novels | Wiki | All | Gap
BNPno | Blog | 71.81 | NA | 73.82 | 75.93 | 74.62 | 76.07 | 77.87 (+6.06) | +1.80
BNPno | Europarl | 78.21 | 78.73 | NA | 79.43 | 80.78 | 79.89 | 82.12 (+3.91) | +1.34
BNPno | Public | 83.60 | 84.16 | 83.53 | NA | 81.99 | 83.09 | 86.30 (+2.70) | +2.14
BNPno | Novels | 81.71 | 85.13 | 83.34 | 84.80 | NA | 85.08 | 87.86 (+6.15) | +2.73
BNPno | Wiki | 78.52 | 80.32 | 80.47 | 80.40 | 80.46 | NA | 82.97 (+4.45) | +2.50
BNP | Blog | 72.37 | NA | 74.69 | 76.55 | 76.19 | 75.79 | 79.12 (+6.75) | +3.46
BNP | Europarl | 78.56 | 80.70 | NA | 80.97 | 80.66 | 81.85 | 83.01 (+4.45) | +1.22
BNP | Public | 82.66 | 83.75 | 83.90 | NA | 83.34 | 84.51 | 86.98 (+4.32) | +2.47
BNP | Novels | 82.70 | 85.69 | 84.96 | 86.14 | NA | 86.33 | 88.44 (+5.74) | +2.11
BNP | Wiki | 78.45 | 80.04 | 79.92 | 82.13 | 81.26 | NA | 83.13 (+4.68) | +1.00
SuPar | Blog | 73.11 | NA | 74.85 | 74.41 | 73.72 | 75.86 | 80.21 (+7.10) | +4.35
SuPar | Europarl | 80.86 | 81.89 | NA | 83.35 | 81.95 | 82.20 | 84.32 (+3.46) | +.97
SuPar | Public | 83.30 | 84.54 | 83.22 | NA | 85.06 | 84.41 | 86.50 (+3.20) | +1.44
SuPar | Novels | 83.82 | 86.22 | 85.57 | 85.97 | NA | 85.50 | 89.04 (+5.22) | +2.82
SuPar | Wiki | 80.31 | 82.18 | 82.18 | 83.48 | 81.77 | NA | 85.21 (+4.90) | +1.73
" }, "TABREF9": { "html": null, "num": null, "text": "Full SET Results with swBERT on Dev Set", "type_str": "table", "content": "
Parser | Genre | Baseline | Blog | Europarl | Public | Novels | Wiki | All | Gap
BNPno | Blog | 79.37 | NA | 81.67 | 81.70 | 82.63 | 82.68 | 84.25 (+4.88) | +1.57
BNPno | Europarl | 84.38 | 86.60 | NA | 86.82 | 87.34 | 86.63 | 86.96 (+2.58) | +.14
BNPno | Public | 89.03 | 89.66 | 89.85 | NA | 90.47 | 90.26 | 90.93 (+1.90) | +.46
BNPno | Novels | 87.29 | 91.00 | 89.03 | 91.39 | NA | 90.53 | 92.74 (+5.45) | +1.35
BNPno | Wiki | 84.21 | 86.23 | 86.66 | 86.41 | 87.05 | NA | 87.77 (+3.56) | +.72
BNP | Blog | 78.63 | NA | 81.18 | 81.74 | 82.44 | 82.48 | 83.95 (+5.32) | +1.47
BNP | Europarl | 84.63 | 86.33 | NA | 87.00 | 86.42 | 86.36 | 87.75 (+3.12) | +.75
BNP | Public | 88.90 | 89.77 | 89.50 | NA | 89.86 | 89.00 | 91.35 (+2.45) | +1.49
BNP | Novels | 88.38 | 91.78 | 89.88 | 91.26 | NA | 91.39 | 92.41 (+4.03) | +.63
BNP | Wiki | 85.47 | 86.94 | 86.61 | 86.43 | 86.50 | NA | 87.61 (+2.14) | +.67
SuPar | Blog | 79.36 | NA | 81.79 | 81.76 | 82.64 | 83.28 | 84.45 (+5.09) | +1.17
SuPar | Europarl | 84.92 | 85.90 | NA | 86.54 | 86.54 | 86.74 | 88.18 (+3.26) | +1.44
SuPar | Public | 87.53 | 88.46 | 88.34 | NA | 90.00 | 89.32 | 90.74 (+3.21) | +.74
SuPar | Novels | 88.64 | 90.65 | 89.65 | 91.67 | NA | 90.93 | 93.04 (+4.40) | +1.37
SuPar | Wiki | 85.96 | 87.70 | 86.88 | 87.93 | 87.37 | NA | 89.26 (+3.30) | +1.33
" }, "TABREF10": { "html": null, "num": null, "text": "", "type_str": "table", "content": "" } } } }