{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:33:01.605051Z"
},
"title": "Language Models as Context-sensitive Word Search Engines",
"authors": [
{
"first": "Matti",
"middle": [],
"last": "Wiegmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bauhaus-Universit\u00e4t Weimar",
"location": {}
},
"email": "matti.wiegmann@uni-weimar.de"
},
{
"first": "Michael",
"middle": [],
"last": "V\u00f6lske",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bauhaus-Universit\u00e4t Weimar",
"location": {}
},
"email": "michael.voelske@uni-weimar.de"
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bauhaus-Universit\u00e4t Weimar",
"location": {}
},
"email": "benno.stein@uni-weimar.de"
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leipzig University",
"location": {}
},
"email": "martin.potthast@uni-leipzig.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Context-sensitive word search engines are writing assistants that support word choice, phrasing, and idiomatic language use by indexing large-scale n-gram collections and implementing a wildcard search. However, search results become unreliable with increasing context size (e.g., n \u2265 5), when observations become sparse. This paper proposes two strategies for word search with larger n, based on masked and conditional language modeling. We build such search engines using BERT and BART and compare their capabilities in answering English context queries with those of the ngram-based word search engine Netspeak. Our proposed strategies score within 5 percentage points MRR of n-gram collections while answering up to 5 times as many queries. 1",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Context-sensitive word search engines are writing assistants that support word choice, phrasing, and idiomatic language use by indexing large-scale n-gram collections and implementing a wildcard search. However, search results become unreliable with increasing context size (e.g., n \u2265 5), when observations become sparse. This paper proposes two strategies for word search with larger n, based on masked and conditional language modeling. We build such search engines using BERT and BART and compare their capabilities in answering English context queries with those of the ngram-based word search engine Netspeak. Our proposed strategies score within 5 percentage points MRR of n-gram collections while answering up to 5 times as many queries. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A wide range of computer tools has been developed to support the writing process, including both active and passive ones. Active tools automatically paraphrase a text as it is written, if the text is highly likely to be incorrect or stylistically inappropriate. Passive tools suggest either spelling, grammar, and style corrections or how to continue a sentence. Passive tools that are less integrated into word processors are context-free and context-sensitive word search engines. Context-free search engines include searchable dictionaries, thesauri, and collections of idioms in which queries are made about a known word or phrase for which alternatives are sought. In the absence of context, their search results are usually sorted alphabetically. Contextsensitive word search engines allow their users to formulate cloze-style queries to search for an unknown word or phrase, ranking the search results according to their frequency of use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A conventional context-sensitive word search engine, as shown in Figure 1, Figure 1 : A context query q with result set D q as retrieved from an index \u00b5 of observed n-grams (right), and as predicted from, e.g., a language model (left).",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 74,
"text": "Figure 1,",
"ref_id": null
},
{
"start": 75,
"end": 83,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "query q = the * fox asking for words or phrases commonly written between 'the' and 'fox' by retrieving the appropriate subset D q \u2286 D from a collection of n-grams D. Formally, the index \u00b5 : Q \u2192 P(D) maps the set of cloze queries Q to the power set P(D), which is implemented as wildcard retrieval, and the results \u00b5(q) = D q are ordered by their occurrence frequency in a large text corpus, which approximates the frequency of use. Assuming a sufficiently large text corpus is available such that each n-gram matching a given cloze query q has been observed sufficiently often, ranking these n-grams by their frequency satisfies the probability ranking principle (Robertson, 1977) . In other words, if one asks a sufficiently large number of people to answer a cloze query, the frequency distribution of the answers would correlate with that of the n-grams found. The main limitations of this approach are, (1) that the number of context words in each cloze query is limited by n, with more context reducing the size of the cloze accordingly, and, (2) that the size of the text corpus required to observe q sufficiently often increases exponentially with n, so that in practice n < 10.",
"cite_spans": [
{
"start": 663,
"end": 680,
"text": "(Robertson, 1977)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, these two limitations are addressed by using transformer-based language models to predict phrases corresponding to a query, rather than retrieving them from an n-gram index. In particular, we propose a masked language model and an autoregressive model for conditional generation to answer cloze queries (Section 3). These models are compared to Netspeak, a state-of-the-art",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Netspeak dBERT dBERT f t BART BART f t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) this chinese <folk> new wikipedia force guy language restaurant language government girl word custom translation had man translation company dictionary language is style pronunciation context-sensitive word search engine based on an index of Google n-grams (Section 4). Based on the cloze test corpus CLOTH (Xie et al., 2018) and Wikitext (Merity et al., 2016) , both of our proposed language models achieve an MRR near their theoretical maximum, falling short of Netspeak's only between 0.03-0.07, and they exceed a mean nDCG of 0.3 in predicting Netspeak's D q (Section 5).",
"cite_spans": [
{
"start": 331,
"end": 349,
"text": "(Xie et al., 2018)",
"ref_id": null
},
{
"start": 363,
"end": 384,
"text": "(Merity et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 17,
"end": 207,
"text": "<folk> new wikipedia force guy language restaurant language government girl word custom translation had man translation company dictionary language is style pronunciation",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, context-sensitive word search engines are supportive writing assistants targeting the editing phase of the writing process (Rohman, 1965; Seow, 2002) . Supportive writing assistants take the form of online dictionaries, thesauri, concordancers (like WriteBetter (Bellino and Bascu\u00f1\u00e1n, 2020)), or other resources offering definitions, synonyms, and translations. More advanced assistants provide a tailored query language that allows for searching words matching a pattern (OneLook.com), words that rhyme (Rhymezone.com), or words that fit a given context (e.g., Netspeak (Stein et al., 2010) , Google n-gram viewer (Michel et al., 2011) , Linggle (Boisson et al., 2013) , and Phrasefinder. io). Context-sensitive word search is related to several foundational NLP tasks like lexical substitution ( Figure 2: Context-sensitive word search can be learned using masked (MLM) or conditional language modeling (CDLM) with denoising or infilling. The result set D q for MLM and denoising is the output at the mask's position sorted by likelihood. For infilling, D q is the generation target. Our proposed MLM is trained and finetuned as usual; Our CDLM is trained by denoising and finetuned by infilling, but predicts via denoising.",
"cite_spans": [
{
"start": 135,
"end": 149,
"text": "(Rohman, 1965;",
"ref_id": "BIBREF15"
},
{
"start": 150,
"end": 161,
"text": "Seow, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 583,
"end": 603,
"text": "(Stein et al., 2010)",
"ref_id": "BIBREF17"
},
{
"start": 627,
"end": 648,
"text": "(Michel et al., 2011)",
"ref_id": "BIBREF12"
},
{
"start": 659,
"end": 681,
"text": "(Boisson et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "2021), word sense disambiguation, paraphrasing, and phrase-level substitution (Madnani and Dorr, 2010) , although these tasks usually require a known word or phrase. Expression matching and corpus-based statistics form the basis for writing assistants, while language models, mostly based on the transformer architecture (Vaswani et al., 2017) , often take on the heavy lifting (Alikaniotis et al., 2019) . Transformer-encoder models, like BERT (Devlin et al., 2019) , are often pre-trained by masked language modeling, which is highly similar to wildcard word search but knows only one correct target. Encoder models are frequently applied to solve cloze tests (Gon\u00e7alo Oliveira, 2021) and its related foundational tasks. Autoregressive language models, like GPT (Radford et al., 2019) , are used for infilling (Donahue et al., 2020) , which is similar to mask prediction but generates arbitrary-length sequences. Conditional language models (autoencoders) are used in phrase-level substitution tasks like denoising (Lewis et al., 2019) .",
"cite_spans": [
{
"start": 78,
"end": 102,
"text": "(Madnani and Dorr, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 321,
"end": 343,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 378,
"end": 404,
"text": "(Alikaniotis et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 445,
"end": 466,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 764,
"end": 786,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 812,
"end": 834,
"text": "(Donahue et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 1017,
"end": 1037,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we formulate context-sensitive word search with language models as learning a distribution p(w q | q), where q = q l ? q r consists of left and right side contexts q l and q r and a wildcard token ?. Either q l or q r can be empty. The result set D q consist of all n-grams q l w q,i q l for all w q,i \u2208 w q , in descending order of likelihood. We propose two strategies to learn p(w q | q): via masked language modeling and via conditional language modeling with an adapted fine-tuning strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling for Word Search",
"sec_num": "3"
},
{
"text": "Masked Language Modeling Masked language modeling (MLM) is equivalent to context-sensitive word search with only a single token as the answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling for Word Search",
"sec_num": "3"
},
{
"text": "Since large language models based on transformerencoders solve MLM by learning p(w q | q) and scoring all options in the vocabulary, the scored vocabulary can be used to extract D q . As shown in Figure 2a , we use a bidirectional transformerencoder (BERT) model, pre-trained with MLM, to estimate p(w q | q). We extract the 30 tokens with the highest score from the output logits of the language modeling head as D q . We fine-tune the model with a specialized masked language modeling task, using individual n-grams as input. Although any BERT variant can be used, we choose DistilBERT for its size and speed, since contextsensitive word search is a real-time search task.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 205,
"text": "Figure 2a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modeling for Word Search",
"sec_num": "3"
},
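A minimal sketch of the MLM word-search strategy described above (an editorial illustration, not the authors' released code): a pre-trained DistilBERT checkpoint scores the whole vocabulary at the wildcard position, and the 30 highest-scoring tokens form D q. The '?' wildcard format and the top-k size are assumptions taken from the surrounding text.

```python
# Sketch only: answer a cloze query such as "the ? fox" with a masked language model,
# assuming the Huggingface `transformers` library and the public
# `distilbert-base-uncased` checkpoint (the paper additionally fine-tunes this model).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
model.eval()

def word_search_mlm(query: str, k: int = 30) -> list[str]:
    # Replace the wildcard token '?' with the model's mask token.
    text = query.replace("?", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position and rank the whole vocabulary by its logit there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    top = torch.topk(logits[0, mask_pos], k)
    return [tokenizer.decode([token_id]).strip() for token_id in top.indices.tolist()]

# Example: word_search_mlm("the ? fox") returns the 30 most likely fill-ins, ranked.
```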
{
"text": "Conditional language modeling (CDLM) is causal (or generative) language modeling given a condition. Contextsensitive word search can be formulated as CDLM with two strategies: denoising (see Figure 2b ) and infilling (see Figure 2c) . Denoising takes the query as the condition and generates the original sequence, where D q can be extracted from the output logits at the mask's position, as with an MLM. Infilling takes the query as condition and generates D q . We use a sequence-to-sequence model for conditional generation (BART) and predict D q with denoising, extracting the 30 tokens with the highest score. We fine-tune BART using infilling, but use denoising to predict D q after the fine-tuning. ",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 200,
"text": "Figure 2b",
"ref_id": null
},
{
"start": 222,
"end": 232,
"text": "Figure 2c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conditional Language Modeling",
"sec_num": null
},
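A corresponding sketch of the CDLM denoising strategy (again an illustration under the same assumptions, using the public facebook/bart-base checkpoint): when BART receives the masked query without explicit decoder inputs, it reconstructs the input sequence, and the logits at the mask position can be ranked just as in the MLM case.

```python
# Sketch only: denoising-style word search with BART, assuming the Huggingface
# `transformers` library and the public `facebook/bart-base` checkpoint.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

def word_search_denoising(query: str, k: int = 30) -> list[str]:
    # "the ? fox" -> "the <mask> fox"
    text = query.replace("?", tokenizer.mask_token)
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        # With no decoder inputs given, BART shifts the (masked) input right and
        # reconstructs it, so the logits at the mask position score the vocabulary.
        logits = model(input_ids).logits
    mask_pos = (input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    top = torch.topk(logits[0, mask_pos], k)
    return [tokenizer.decode([token_id]).strip() for token_id in top.indices.tolist()]
```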
{
"text": "We implemented both strategies of learning contextsensitive word search using the Huggingface (Wolf et al., 2020) implementation of DistilBERT for MLM and BART for CDLM. We evaluate the pretrained and the fine-tuned models against the two datasets with word search queries shown in Table 2 .",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Wolf et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Data We constructed two datasets with word search queries. The original token (OT) dataset offers as the single answer the token chosen by the author of the source text. The ranked answers (RA) dataset offers multiple, ordered answers with relevance judgments for each query. The original token dataset consists of queries extracted from Wikitext-103 (Merity et al., 2016) , which consists of good or featured English Wikipedia articles, and CLOTH (Xie et al., 2018) , which consists of middle and high school learner's English cloze-tests. For Wikitext, we constructed n queries for each 3-to-9-gram by replacing the token at each position in the n-gram with a wildcard and adding the original token as the answer. We discarded all newlines, headlines starting with a =, n-grams with non-letter tokens to not cross sentence boundaries or quotations, and queries with proper nouns as answers. For CLOTH, we constructed a query for each 3 and 5-gram that overlapped with a cloze-gap in the dataset and added the teacher's preferred answer as the original token answer. We discarded all n-grams with non-letter tokens and proper nouns as answers. Each wildcard was assigned one of 5 word classes based on Spacy's POS annotations of the source sentences: verbs and auxiliaries, nouns, determiners and pronouns, adjectives and adverbs, and conjunctions and particles. Verb and noun classes were marked Rank on 5 Token Queries Rank on 3 Token Queries Figure 3 : The nDCG of the ranked results between the models on the ranked results test datasets. The relevance judgments were determined via Netspeak's ranking, which is equivalent to the frequencies in Google n-grams.",
"cite_spans": [
{
"start": 351,
"end": 372,
"text": "(Merity et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 448,
"end": 466,
"text": "(Xie et al., 2018)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1446,
"end": 1454,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
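For illustration, a sketch of the query construction described above (a hypothetical helper, not the authors' released preprocessing): each position of an n-gram becomes one wildcard query whose single answer is the original token; the filtering of non-letter tokens and proper nouns is omitted here.

```python
# Sketch only: turn one n-gram into n original-token (OT) queries.
def ngram_to_queries(ngram: str) -> list[tuple[str, str]]:
    tokens = ngram.split()
    queries = []
    for i, answer in enumerate(tokens):
        query = " ".join(tokens[:i] + ["?"] + tokens[i + 1:])
        queries.append((query, answer))
    return queries

# ngram_to_queries("the quick brown") ->
# [("? quick brown", "the"), ("the ? brown", "quick"), ("the quick ?", "brown")]
```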
{
"text": "if the query contains another verb or noun, respectively. As the training set, we selected the first 10 million queries from the training split of Wikitext. As the dev set, we selected all queries extracted from Wikitext's dev split. As the test set, we used all 3 and 5-gram queries from Wikitext's test split and all CLOTH splits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "The ranked answers datasets consist of the queries from the original token dataset, but all answers were replaced by the top 30 results retrieved from Netspeak, which is equivalent to the most frequent observations in Google n-grams. We assigned a relevance score to each result based on its absolute frequency: above 100K we assigned a high (3) score, above 10K a medium (2) score, with any occurrence a low (1), and otherwise a zero (0) relevance score. We discarded all queries with an empty result set. We determined the splits analogously to the original token dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
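A sketch of the relevance grading just described (hypothetical helper name): Netspeak result frequencies are binned into the four graded relevance levels used for the nDCG evaluation.

```python
# Sketch only: map an absolute Google n-gram frequency to a graded relevance judgment.
def relevance_from_frequency(frequency: int) -> int:
    if frequency > 100_000:
        return 3  # high
    if frequency > 10_000:
        return 2  # medium
    if frequency > 0:
        return 1  # low
    return 0      # not observed
```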
{
"text": "Model Configuration For the MLM strategy, we fine-tune Huggingface's implementation of DistilBERTForMaskedLM on the original token dataset, using the pre-trained distilbert-base-uncased checkpoint. We only exposed one n-gram as input at a time. We train the model using the standard training routine with default parameters, although we doubled the masking probability to 30 %, twice the rate used for BERT (Devlin et al., 2019) , and adapted the initial learning rate to 2e-5 and the weight decay to 0.01. We evaluate the performance once with the pre-trained checkpoint as dBERT and once after fine-tuning as dBERT f t .",
"cite_spans": [
{
"start": 407,
"end": 428,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "For the CDLM strategy, we fine-tune Huggingface's implementation of BARTFor-ConditionalGeneration for infilling on the ranked answers dataset using the pre-trained facebook/bart-base checkpoint. We only exposed one n-gram as input at a time and used the same hyperparameters as with the MLM strategy, except that masking was done manually. We evaluate the performance with the pre-trained checkpoint as BART and after fine-tuning as BART f t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We quantitatively evaluate our proposed methods using the mean reciprocal rank (MRR) and the normalized discounted cumulative gain (nDCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) .",
"cite_spans": [
{
"start": 138,
"end": 169,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
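For reference, minimal implementations of the two measures as used here (an illustration; the paper does not prescribe a particular implementation): MRR over the single original-token answer and nDCG over the graded relevance judgments.

```python
# Sketch only: mean reciprocal rank and nDCG@k for the evaluation described above.
import math

def mean_reciprocal_rank(result_lists: list[list[str]], answers: list[str]) -> float:
    total = 0.0
    for results, answer in zip(result_lists, answers):
        if answer in results:
            total += 1.0 / (results.index(answer) + 1)
    return total / len(answers)

def ndcg_at_k(results: list[str], relevance: dict[str, int], k: int = 10) -> float:
    # relevance maps a result token to its graded judgment (0-3).
    dcg = sum(relevance.get(t, 0) / math.log2(i + 2) for i, t in enumerate(results[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```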
{
"text": "System Performance We evaluate the system performance using the MRR of the author's chosen word, shown in Table 3 , assuming that the author's chosen word in the source text is also a good answer to the cloze query. Therefore, the better word search engine should rank the author's choice higher on average over many queries. Table 3 shows the MRR for the four models compared to Netspeak, once over all queries in the test datasets, and once for the shared subset of queries where Netspeak returned non-empty results.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 326,
"end": 333,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The MRR results allow three conclusions. First, our proposed fine-tuning strategy improves the pre-trained baseline's performance consistently for BART and on queries from Wikitext for dBERT. Second, on queries from RA, the best models already perform close to Netspeak. Third, both fine-tuned models can answer 4-5 times as many queries than Netspeak, which can be observed from the ratio between RA and OT datasets. Since the OT dataset contains up to 82% uncommon queries, which have no support in the Google ngrams indexed by Netspeak, the language models score up to 9 percentage points lower than on RA. The MRR increases with increasing context size since additional context can only reduce the set of potentially matching answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Ranking We evaluate the ranking of the results using the nDCG as shown in Figure 3 . Consistent with the MRR results, the fine-tuned models outperform their pre-trained counterpart, dBERT profits more from fine-tuning and performs best. Most of the relevant results are in the top ranks since the nDCG scores only marginally improve past rank 10.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Position and Word Class We evaluate further query attributes besides size and genre, wildcard position, and wildcard word class, using the MRR as shown in Figure 4 . These results show that a large part of the performance gain when fine-tuning can be attributed to gains in the closed-class words. The MRR is lower for open-class words since there are more plausible options for each query and the original token is on a lower rank more often. Finetuning has only a marginal impact on open-class words. dBERT scores the lowest when the wildcard is either at the beginning or at the end of the query, while BART scores the lowest for wildcards at the beginning. Fine-tuning significantly improves the performance in these cases, with only marginal improving queries with wildcards in the center positions.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The performance difference between closed and open-class words also partially explains the substantially lower MRR and nDCG scores over CLOTH queries for all models: The answers to cloth-queries more often belong to lower scoring open classes, the answers to Wikitext-queries more frequently belong to the high scoring closed classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Runtime We compare the runtime performance by measuring the average time to answer a query (see Table 3 ) over all queries in the ranked answers test dataset. Netspeak and dBERT are equally fast with 5 ms per query, while BART takes twice as long. In practice, both language models are fast enough for context-sensitive word search. We measured the performance of the language models with sequential, non-batched queries on GPU. We measured the performance of Netspeak with a local Netspeak instance and a local index, queried through Netspeak's GRPC API. All systems were tested in identical containers with 4 AMD EPYC 7F72 CPU cores, 32 GB of RAM, and one A100 GPU.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "This paper investigates whether state-of-the-art language models can mitigate the shortcomings of n-gram indices in context-sensitive word search engines. We present strategies to fine-tune masked and conditional language models so that they can answer word search queries. Our evaluation shows that our proposed methods can answer short queries (3 tokens) nearly as well as by observing actual n-gram frequencies in a large text corpus. Further-more, our fine-tuned models perform well when supporting observations are scarce so that n-gram indices provide no results. Since this already is the dominant case for n = 5, we can conclude that language models, fine-tuned for word search queries, are a suitable extension to context-sensitive word search engines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Context-sensitive word search engines provide easier access to language resources and our work extends this to data from language models. This implies an increased risk of leaking sensible data contained in the source data. We avoided training models to predict proper nouns to avoid that a model can be used to search for personal information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact Statement",
"sec_num": null
},
{
"text": "We use and combine data from Wikitext (i.e. Wikipedia), CLOTH, and the Google Web and Books n-grams, obtained from publicly available and appropriately acknowledged sources and according to their terms and conditions. Our derived systems and evaluation procedure may be susceptible to biases inherent in the data we used. We took no extra steps to de-bias the models or data used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact Statement",
"sec_num": null
},
{
"text": "Our code is available at Github and our data is available at Zenodo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The unreasonable effectiveness of transformer language models in grammatical error correction",
"authors": [
{
"first": "Dimitris",
"middle": [],
"last": "Alikaniotis",
"suffix": ""
},
{
"first": "Vipul",
"middle": [],
"last": "Raheja",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"R"
],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@ACL 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/w19-4412"
]
},
"num": null,
"urls": [],
"raw_text": "Dimitris Alikaniotis, Vipul Raheja, and Joel R. Tetreault. 2019. The unreasonable effectiveness of transformer language models in grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@ACL 2019. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Design and evaluation of writebetter: A corpus-based writing assistant",
"authors": [
{
"first": "Alessio",
"middle": [],
"last": "Bellino",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Bascu\u00f1\u00e1n",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "70216--70233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessio Bellino and Daniela Bascu\u00f1\u00e1n. 2020. Design and evaluation of writebetter: A corpus-based writing assistant. IEEE Access, 8:70216-70233.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Linggle: A Web-scale Linguistic Search Engine for Words in Context",
"authors": [
{
"first": "Joanne",
"middle": [],
"last": "Boisson",
"suffix": ""
},
{
"first": "Ting-Hui",
"middle": [],
"last": "Kao",
"suffix": ""
},
{
"first": "Jian-Cheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tzu-Hsi",
"middle": [],
"last": "Yen",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "139--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joanne Boisson, Ting-Hui Kao, Jian-Cheng Wu, Tzu-Hsi Yen, and Jason S. Chang. 2013. Linggle: A Web-scale Linguistic Search Engine for Words in Context. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 139-144, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enabling language models to fill in the blanks",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Mina",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2492--2501",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.225"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2492-2501, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Answering fill-in-the-blank questions in portuguese with transformer language models",
"authors": [
{
"first": "",
"middle": [],
"last": "Hugo Gon\u00e7alo Oliveira",
"suffix": ""
}
],
"year": 2021,
"venue": "Progress in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "739--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Gon\u00e7alo Oliveira. 2021. Answering fill-in-the-blank questions in portuguese with transformer language models. In Progress in Artificial Intelligence, pages 739-751, Cham. Springer International Publishing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cumulated gain-based evaluation of ir techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Trans. Inf. Syst",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {
"DOI": [
"10.1145/582415.582418"
]
},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Trans. Inf. Syst., 20(4):422?446.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Swords: A benchmark for lexical substitution with improved data coverage and quality",
"authors": [
{
"first": "Mina",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Iyabor",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4362--4379",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.345"
]
},
"num": null,
"urls": [],
"raw_text": "Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, and Percy Liang. 2021. Swords: A benchmark for lexical substitution with improved data coverage and quality. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4362-4379, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bonnie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "341--387",
"other_ids": {
"DOI": [
"10.1162/coli_a_00002"
]
},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani and Bonnie J. Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-387.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2007 task 10: English lexical substitution task",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. SemEval-2007 task 10: English lexical substitution task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 48-53, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. CoRR, abs/1609.07843.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Quantitative analysis of culture using millions of digitized books",
"authors": [
{
"first": "Jean-Baptiste",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yuan Kui",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Aviva",
"middle": [],
"last": "Presser Aiden",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Veres",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"K"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Pickett",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Hoiberg",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Clancy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Pinker",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"A"
],
"last": "Nowak",
"suffix": ""
},
{
"first": "Erez",
"middle": [],
"last": "Lieberman Aiden",
"suffix": ""
}
],
"year": 2011,
"venue": "Science",
"volume": "331",
"issue": "6014",
"pages": "176--182",
"other_ids": {
"DOI": [
"10.1126/science.1199644"
]
},
"num": null,
"urls": [],
"raw_text": "Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, null null, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176-182.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The probability ranking principle in ir",
"authors": [
{
"first": "",
"middle": [],
"last": "Stephen E Robertson",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of documentation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E Robertson. 1977. The probability ranking principle in ir. Journal of documentation.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pre-writing the stage of discovery in the writing process. College composition and communication",
"authors": [
{
"first": "",
"middle": [],
"last": "D Gordon Rohman",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "16",
"issue": "",
"pages": "106--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Gordon Rohman. 1965. Pre-writing the stage of discovery in the writing process. College composition and communication, 16(2):106-112.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Methodology in language teaching: An anthology of current practice",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Seow",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "315--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Seow. 2002. The writing process and process writing. Methodology in language teaching: An anthology of current practice, pages 315-320.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Retrieving Customary Web Language to Assist Writers",
"authors": [
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Trenkmann",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Information Retrieval. 32nd European Conference on Information Retrieval",
"volume": "5993",
"issue": "",
"pages": "631--635",
"other_ids": {
"DOI": [
"10.1007/978-3-642-12275-0_64"
]
},
"num": null,
"urls": [],
"raw_text": "Benno Stein, Martin Potthast, and Martin Trenkmann. 2010. Retrieving Customary Web Language to Assist Writers. In Advances in Information Retrieval. 32nd European Conference on Information Retrieval (ECIR 2010), volume 5993 of Lecture Notes in Computer Science, pages 631-635, Berlin Heidelberg New York. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Perric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.5931690"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Perric Cistac, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Large-scale cloze test dataset created by teachers",
"authors": [],
"year": null,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2344--2356",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1257"
]
},
"num": null,
"urls": [],
"raw_text": "Large-scale cloze test dataset created by teachers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2344-2356, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "The MRR by word class (left) and wildcard position (center and right) of Netspeak and the four Models on the Original Token test dataset. Queries that Netspeak could not answer were ignored. The gray bars indicate the relative frequency.",
"uris": null
},
"TABREF2": {
"content": "<table><tr><td>: Selected context queries with the &lt;original token&gt; and the top 5 results of all models. The origi-nal token in the results is underlined, the overlap with Netspeak's results is boldface.</td></tr></table>",
"type_str": "table",
"text": "",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"text": "The original token (OT) dataset consists of ngram queries extracted from Wikitext-103 and CLOTH and lists the original token as the single answer. The ranked answers (RA) dataset is extracted from OT by replacing the answer with the ranked results retrieved from Netspeak, discarding all unanswered queries.",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">Wikitext</td><td colspan=\"2\">CLOTH</td></tr><tr><td/><td>3</td><td>5</td><td>3</td><td>5</td></tr><tr><td>BART ft</td><td colspan=\"4\">0.29 0.28 0.43 0.34 0.07 0.07 0.17 0.12 11.27 ms</td></tr><tr><td>Ratio</td><td>90 %</td><td>18 %</td><td>97 %</td><td>27 %</td></tr></table>",
"type_str": "table",
"text": "NA all NA all NA all NA all Time Netspeak 0.33 -0.46 -0.10 -0.22 -5.34 ms dBERT 0.15 0.14 0.33 0.28 0.06 0.06 0.17 0.15 -dBERT ft 0.30 0.29 0.42 0.35 0.05 0.05 0.10 0.08 5.05 ms BART 0.19 0.18 0.37 0.31 0.05 0.05 0.15 0.12 -",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table/>",
"type_str": "table",
"text": "The average MRR of the original token for all queries in the OT test datasets, split by source and query length. NA \u2286 OT only considers queries that Netspeak could answer and Ratio indicates the subset size. Time indicates the average response time for one query.",
"num": null,
"html": null
}
}
}
}