{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:56.897525Z" }, "title": "PIE: A Parallel Idiomatic Expression Corpus for Idiomatic Sentence Generation and Paraphrasing", "authors": [ { "first": "Jianing", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": {} }, "email": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "", "affiliation": {}, "email": "hygong@fb.com" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": {} }, "email": "spbhat2@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Idiomatic expressions (IE) play an important role in natural language, and have long been a \"pain in the neck\" for NLP systems. Despite this, text generation tasks related to IEs remain largely under-explored. In this paper, we propose two new tasks of idiomatic sentence generation and paraphrasing to fill this research gap. We introduce a curated dataset of 823 IEs, and a parallel corpus with sentences containing them and the same sentences where the IEs were replaced by their literal paraphrases as the primary resource for our tasks. We benchmark existing deep learning models, which have state-of-the-art performance on related tasks using automated and manual evaluation with our dataset to inspire further research on our proposed tasks. By establishing baseline models, we pave the way for more comprehensive and accurate modeling of IEs, both for generation and paraphrasing. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Idiomatic expressions (IE) play an important role in natural language, and have long been a \"pain in the neck\" for NLP systems. 
Despite this, text generation tasks related to IEs remain largely under-explored. In this paper, we propose two new tasks of idiomatic sentence generation and paraphrasing to fill this research gap. We introduce a curated dataset of 823 IEs, and a parallel corpus with sentences containing them and the same sentences where the IEs were replaced by their literal paraphrases, as the primary resource for our tasks. We benchmark existing deep learning models that have state-of-the-art performance on related tasks, using automated and manual evaluation on our dataset, to inspire further research on our proposed tasks. By establishing baseline models, we pave the way for more comprehensive and accurate modeling of IEs, both for generation and paraphrasing. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Idiomatic expressions (IEs) make language natural. These are multiword expressions (MWEs) that are non-compositional because their meaning differs from the literal meaning of their constituent words taken together (Nunberg et al., 1994) . Their use imparts naturalness and fluency (Wray and Perkins, 2000; Sprenger, 2003; Pawley and Syder, 2014; Schmitt and Schmitt, 2020) , is prompted by pragmatic and topical functions in discourse (Simpson and Mendis, 2003) and often conveys a nuance in expression (stylistic enhancement) using imagery that is beyond what is available in the context (Nunberg et al., 1994) . Idiomatic expressions, including phrasal verbs (e.g., carry out) and idioms (e.g., pull one's leg), are also an essential part of a native speaker's vocabulary and lexicon (Jackendoff, 1995) . 
IEs constitute a ubiquitous part of daily language and social communication: they are used primarily in conversation, fiction and news (Biber et al., 1999) , are frequently used by teachers when presenting their lessons to students (Kerbel and Grunwell, 1997) , and occur cross-lingually (Baldwin et al., 2010; Nunberg et al., 1994) . Their non-compositionality is the reason for their classical standing as \"a pain in the neck\" (Sag et al., 2002) and \"hard going\" (Rayson et al., 2010) for NLP.", "cite_spans": [ { "start": 214, "end": 236, "text": "(Nunberg et al., 1994)", "ref_id": null }, { "start": 281, "end": 305, "text": "(Wray and Perkins, 2000;", "ref_id": "BIBREF65" }, { "start": 306, "end": 321, "text": "Sprenger, 2003;", "ref_id": "BIBREF58" }, { "start": 322, "end": 345, "text": "Pawley and Syder, 2014;", "ref_id": "BIBREF43" }, { "start": 346, "end": 372, "text": "Schmitt and Schmitt, 2020)", "ref_id": "BIBREF50" }, { "start": 435, "end": 461, "text": "(Simpson and Mendis, 2003)", "ref_id": "BIBREF57" }, { "start": 589, "end": 611, "text": "(Nunberg et al., 1994)", "ref_id": null }, { "start": 781, "end": 799, "text": "(Jackendoff, 1995)", "ref_id": "BIBREF21" }, { "start": 928, "end": 948, "text": "(Biber et al., 1999)", "ref_id": "BIBREF4" }, { "start": 1021, "end": 1048, "text": "(Kerbel and Grunwell, 1997)", "ref_id": "BIBREF25" }, { "start": 1075, "end": 1097, "text": "(Baldwin et al., 2010;", "ref_id": "BIBREF3" }, { "start": 1098, "end": 1119, "text": "Nunberg et al., 1994)", "ref_id": null }, { "start": 1216, "end": 1234, "text": "(Sag et al., 2002)", "ref_id": "BIBREF47" }, { "start": 1252, "end": 1273, "text": "(Rayson et al., 2010)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Oxford English Dictionary defines the phrasal verb (an IE) vote out as 'To turn (a person) out of office.' 
Using Google Translate 2 to translate the topical slogan \"vote them out!\" into eight of the world's most spoken and relatively resource-rich languages yielded the results shown in Figure 1 . As native speakers will attest, other than in Spanish, all the translations mean just the opposite, \"vote for them!\" This, and other studies on the computational processing of idioms and metaphors (Salton et al., 2014; Shao et al.; Shutova et al., 2013) , reinforce the need for nuanced language processing, a grand challenge for NLP systems.", "cite_spans": [ { "start": 498, "end": 519, "text": "(Salton et al., 2014;", "ref_id": "BIBREF48" }, { "start": 520, "end": 532, "text": "Shao et al.;", "ref_id": "BIBREF52" }, { "start": 533, "end": 554, "text": "Shutova et al., 2013)", "ref_id": "BIBREF56" } ], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Gaining a deeper understanding of IEs and their literal counterparts is an important step toward this goal. In this paper, we introduce two novel tasks related to paraphrasing between literal and idiomatic expressions in unrestricted text: (1) idiomatic sentence simplification (ISS), to automatically paraphrase idiomatic expressions in text, and (2) idiomatic sentence generation (ISG), to replace a literal phrase in a sentence with a synonymous but more vivid phrase (e.g., an idiom). ISS directly addresses the need for performing text simplification in several application settings, including summarizers (Klebanov et al., 2004) and parsing (Constant et al., 2017) . Moreover, ISS may actually be helpful when an idiomatic expression does not have an exact counterpart in a target language. This is akin to the 'translation by paraphrase' strategy recommended for human translation when the source language idiom is obscure or non-existent in the target language (Baker, 2018) . 
On the other hand, ISG advances the area of text style transfer (Jhamtani et al., 2017; Gong et al., 2019) , bringing the as-yet-unexplored dimension of nuanced language to style transfer.", "cite_spans": [ { "start": 608, "end": 631, "text": "(Klebanov et al., 2004)", "ref_id": "BIBREF26" }, { "start": 644, "end": 667, "text": "(Constant et al., 2017)", "ref_id": "BIBREF6" }, { "start": 967, "end": 980, "text": "(Baker, 2018)", "ref_id": "BIBREF2" }, { "start": 1047, "end": 1070, "text": "(Jhamtani et al., 2017;", "ref_id": "BIBREF22" }, { "start": 1071, "end": 1089, "text": "Gong et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A second important component of this paper is the introduction of a new curated dataset of parallel idiomatic and literal sentences, where the idiomatic expressions are paraphrased, created for the purpose of advancing progress in nuanced language processing and serving as a testbed for the proposed tasks. Recent literature has explored several aspects of figurative and non-literal language processing, including detecting and interpreting metaphors (Shutova, 2010b; Shutova et al., 2013) , disambiguating IEs as figurative or literal in a given context (Constant et al., 2017; Savary et al., 2017; Liu and Hwa, 2019) and analyzing sarcasm (Muresan et al., 2016; Joshi et al., 2017; Ghosh et al., 2018) , using curated datasets of sentences exhibiting these linguistic processes in the wild. 
These datasets are ill-suited for the proposed tasks because they consist of specific figurative constructions (metaphors) (Shutova, 2010a) , do not cover multiple IEs (Cook et al., 2008; Korkontzelos et al., 2013) , or are not parallel (Haagsma et al., 2020; Savary et al., 2017) , underscoring the need for a new dataset.", "cite_spans": [ { "start": 452, "end": 468, "text": "(Shutova, 2010b;", "ref_id": "BIBREF55" }, { "start": 469, "end": 490, "text": "Shutova et al., 2013)", "ref_id": "BIBREF56" }, { "start": 563, "end": 586, "text": "(Constant et al., 2017;", "ref_id": "BIBREF6" }, { "start": 587, "end": 607, "text": "Savary et al., 2017;", "ref_id": "BIBREF49" }, { "start": 608, "end": 626, "text": "Liu and Hwa, 2019)", "ref_id": "BIBREF34" }, { "start": 649, "end": 671, "text": "(Muresan et al., 2016;", "ref_id": "BIBREF38" }, { "start": 672, "end": 691, "text": "Joshi et al., 2017;", "ref_id": "BIBREF23" }, { "start": 692, "end": 711, "text": "Ghosh et al., 2018)", "ref_id": "BIBREF12" }, { "start": 915, "end": 931, "text": "(Shutova, 2010a)", "ref_id": "BIBREF54" }, { "start": 960, "end": 979, "text": "(Cook et al., 2008;", "ref_id": "BIBREF7" }, { "start": 980, "end": 1006, "text": "Korkontzelos et al., 2013)", "ref_id": "BIBREF27" }, { "start": 1029, "end": 1051, "text": "(Haagsma et al., 2020;", "ref_id": "BIBREF16" }, { "start": 1052, "end": 1072, "text": "Savary et al., 2017)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The newly constructed dataset permits us to benchmark the performance of several state-of-the-art neural network architectures (seq2seq and pretrained+fine-tuned models, with and without copy-enrichment) that have demonstrated competitive performance in the related tasks of simplification and style transfer. Using automatic and manual evaluations of the outputs for the two tasks, we find that the existing models are inadequate for the proposed tasks. 
The sequence-to-sequence models clearly suffer from data sparsity; the added copy mechanism helps preserve the context that is not replaced; and the pretrained models, despite their prior knowledge, are still limited in their ability to paraphrase and generate. This leads us to discuss novel insights, applications and future directions for related research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this work are summarized as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We propose two new tasks related to idiomatic expressions: idiomatic sentence simplification and idiomatic sentence generation;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We introduce a curated dataset of 823 idiomatic expressions, replete with sentences containing these IEs in the wild and the same sentences where the IEs were replaced by their literal paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We use the combination of the new dataset and the proposed tasks as a lens through which we gain novel insights about the capabilities of deep learning models for processing nuanced language generation and paraphrasing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose two new tasks. The first, idiomatic sentence generation, transforms a literal sentence into a sentence involving idioms. Used frequently in everyday language, idioms are known to add color to expressions and improve the fluency of communication. Idiomatic rewriting improves the quality of text generation in that it can enhance textual diversity and convey abstract and complicated ideas in a succinct manner. 
For example, the idiomatic sentence BP cut corners and violated safety requirements conveys the same idea as its literal counterpart BP saved time, money and energy and violated safety requirements, but in a more vivid and succinct manner. The second task is idiomatic sentence paraphrasing, simplifying sentences with idioms into literal expressions. As an example, the sentence It is certainly not a sensible move to cut corners with national security has the idiom cut corners replacing the literal counterpart save money. By paraphrasing the idioms that machine translation often struggles with, our task of idiomatic sentence paraphrasing can also benefit machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "In this work, we distinguish our task of idiomatic sentence generation from idiom generation. While the latter task creates new idioms with novel word combinations, our study uses existing idioms in a sentence while preserving its semantic meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "The task of idiomatic sentence paraphrasing is closely related to text simplification, which has mostly been studied as the related tasks of lexical and syntactic paraphrasing (Xu et al., 2015) . This task departs significantly from those related tasks, which centrally address style, in that (i) we aim for local synonymous paraphrasing by transforming not the entire sentence but a phrase in the sentence, and (ii) the transformation is related not to syntactic structures, but to the complexity in meaning 3 . 
In effect, we propose joint monolingual translation and simplification, similar in spirit to (Agrawal and Carpuat, 2020) .", "cite_spans": [ { "start": 183, "end": 200, "text": "(Xu et al., 2015)", "ref_id": "BIBREF68" }, { "start": 629, "end": 656, "text": "(Agrawal and Carpuat, 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "There are many technical challenges to performing these tasks. The task of idiomatic sentence paraphrasing involves first identifying that an expression is an idiom and not a literal expression (e.g. black sheep) (Fazly et al., 2009; Korkontzelos et al., 2013; Liu and Hwa, 2019) . Second, the identified IE may have multiple senses (e.g. tick off ) and its appropriate sense will need to be determined before paraphrasing it. Third, an appropriate literal phrase will have to be generated to replace the IE. Finally, the literal phrase will have to fit in the surrounding sentential context for a fluent construction. For idiomatic sentence generation, the context of the literal phrase could permit more than one candidate idiom (e.g. keep quiet). In this study, we assume that we have an idiomatic sentence and leave it to future work to explore the task in conjunction with this step.", "cite_spans": [ { "start": 213, "end": 233, "text": "(Fazly et al., 2009;", "ref_id": "BIBREF9" }, { "start": 234, "end": 260, "text": "Korkontzelos et al., 2013;", "ref_id": "BIBREF27" }, { "start": 261, "end": 279, "text": "Liu and Hwa, 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "2" }, { "text": "The theme of this paper is naturally connected to three streams of text generation tasks: paraphrasing, style transfer and metaphoric expression generation. 
We discuss these tasks and the datasets used for them to study their similarities to and differences from our dataset and tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The aim of paraphrasing is to rewrite a given sentence while preserving its original meaning. Because it has been widely studied in recent research, many datasets have been constructed to facilitate the task. PPDB (Ganitkevitch et al., 2013) , MRPC 4 , Twitter URL Corpus (Lan et al., 2017) , Quora 5 and ParaNMT-50M (Wieting and Gimpel, 2017) have been the most commonly used datasets. Seq2Seq models have been successfully applied to paraphrasing by Prakash et al. (2016) ; Gupta et al. (2018) ; Iyyer et al. (2018) ; Yang et al. (2019) . Besides the end-to-end models, a template-based pipeline model was proposed to divide paraphrase generation into template extraction, template transforming and template filling (Gu et al., 2019) .", "cite_spans": [ { "start": 204, "end": 231, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF10" }, { "start": 262, "end": 280, "text": "(Lan et al., 2017)", "ref_id": "BIBREF28" }, { "start": 307, "end": 333, "text": "(Wieting and Gimpel, 2017)", "ref_id": "BIBREF64" }, { "start": 462, "end": 483, "text": "Prakash et al. (2016)", "ref_id": "BIBREF44" }, { "start": 486, "end": 505, "text": "Gupta et al. (2018)", "ref_id": "BIBREF15" }, { "start": 527, "end": 545, "text": "Yang et al. (2019)", "ref_id": "BIBREF70" }, { "start": 725, "end": 742, "text": "(Gu et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrase", "sec_num": "3.1" }, { "text": "However, unlike whole-sentence or literal-to-literal paraphrasing, our proposed tasks are more constrained given the existence of idiomatic expressions. This renders the datasets used for the task of paraphrasing and the associated paraphrasing models inadequate for our task. 
Our dataset is created to fill this need and to advance a fundamental understanding of idiomatic text generation and paraphrasing. Therefore, research on our tasks and dataset can also benefit paraphrasing when only part of a sentence, or an idiom within it, needs to be paraphrased.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrase", "sec_num": "3.1" }, { "text": "The task of style transfer can be defined as rewriting sentences into those with a target style. Recent research has primarily focused on sentiment manipulation and changes in writing styles (Jhamtani et al., 2017; Gong et al., 2019) . Our proposed tasks differ from recent style transfer studies because (i) our tasks retain a large portion of the input sentences while style transfer may need to completely change the input sentences, and (ii) our tasks explore the nuance component of style, an aspect heretofore unexplored. To test different models' performance on style transfer, several non-parallel corpora have been used (Yelp (Shen et al., 2017) , Grammarly's Yahoo Answers Formality Corpus (Rao and Tetreault, 2018) , the Amazon Food Review dataset (McAuley and Leskovec, 2013) and the Product Review dataset (He and McAuley, 2016) ).", "cite_spans": [ { "start": 188, "end": 211, "text": "(Jhamtani et al., 2017;", "ref_id": "BIBREF22" }, { "start": 212, "end": 230, "text": "Gong et al., 2019)", "ref_id": "BIBREF13" }, { "start": 661, "end": 680, "text": "(Shen et al., 2017)", "ref_id": "BIBREF53" }, { "start": 726, "end": 751, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF45" }, { "start": 781, "end": 809, "text": "(McAuley and Leskovec, 2013)", "ref_id": "BIBREF36" }, { "start": 837, "end": 859, "text": "(He and McAuley, 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Style Transfer", "sec_num": "3.2" }, { "text": "Despite their size, they lack a focus on IEs and are all non-parallel. 
This has led to the study of unsupervised methods for style transfer, including the cross-aligned auto-encoder (Hu et al., 2017) , VAE (Hu et al., 2017) , Generative Adversarial Networks (Zeng et al., 2020) , reinforcement learning for constraints in style transfer (Xu et al., 2018; Gong et al., 2019) and pipeline models (Sudhakar et al., 2019) . Owing to the essential departure of our tasks from previously studied style transfer tasks, and the limitations of non-parallel corpora, we create our own parallel dataset, which focuses on IEs.", "cite_spans": [ { "start": 184, "end": 201, "text": "(Hu et al., 2017)", "ref_id": "BIBREF18" }, { "start": 208, "end": 225, "text": "(Hu et al., 2017)", "ref_id": "BIBREF18" }, { "start": 259, "end": 278, "text": "(Zeng et al., 2020)", "ref_id": "BIBREF72" }, { "start": 338, "end": 355, "text": "(Xu et al., 2018;", "ref_id": "BIBREF67" }, { "start": 356, "end": 374, "text": "Gong et al., 2019)", "ref_id": "BIBREF13" }, { "start": 395, "end": 417, "text": "Sudhakar et al., 2019)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Style Transfer", "sec_num": "3.2" }, { "text": "Prior work on automated metaphor processing has primarily focused on their identification, interpretation and generation (Shutova, 2010b; Shutova et al., 2013; Abe et al., 2006) . Also, data for this task is extremely sparse: there are no large-scale parallel corpora of literal and metaphoric paraphrases aimed at metaphor generation. The most useful one is that of (Mohammad et al., 2016). However, their dataset has a small number (171) of metaphoric sentences extracted from WordNet. Early works on metaphor generation mainly focus on phrase-level metaphors and template-based generation (Terai and Nakagawa, 2010; Ovchinnikova et al., 2014) . Recent works also explore the power of neural networks (Mao et al., 2018; Yu and Wan, 2019; Stowe et al., 2020) . 
However, most of the research on metaphor generation suffers from the lack of parallel corpora.", "cite_spans": [ { "start": 127, "end": 143, "text": "(Shutova, 2010b;", "ref_id": "BIBREF55" }, { "start": 144, "end": 165, "text": "Shutova et al., 2013;", "ref_id": "BIBREF56" }, { "start": 166, "end": 183, "text": "Abe et al., 2006)", "ref_id": "BIBREF0" }, { "start": 616, "end": 642, "text": "(Terai and Nakagawa, 2010;", "ref_id": "BIBREF62" }, { "start": 643, "end": 669, "text": "Ovchinnikova et al., 2014)", "ref_id": "BIBREF41" }, { "start": 727, "end": 745, "text": "(Mao et al., 2018;", "ref_id": "BIBREF35" }, { "start": 746, "end": 763, "text": "Yu and Wan, 2019;", "ref_id": "BIBREF71" }, { "start": 764, "end": 783, "text": "Stowe et al., 2020)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Metaphoric Expression Generation", "sec_num": "3.3" }, { "text": "Our proposed tasks share some similarities with metaphor generation but also have differences. Instead of focusing on the paraphrase of a single word as in most metaphor generation work, our tasks often require a mapping between two multi-word expressions, which makes our tasks more challenging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metaphoric Expression Generation", "sec_num": "3.3" }, { "text": "Text simplification aims to rewrite input sentences into lexically and/or syntactically simplified forms. The Simple Wikipedia Corpus (Zhu et al., 2010) and, more recently, the Newsela dataset (Xu et al., 2015) and the WikiLarge dataset (Zhang and Lapata, 2017) dominate the research area. 
The use of different machine learning models has also been explored for this task, including a statistical machine translation model (Wubben et al., 2012), the Seq2Seq architecture (Nisioi et al., 2017) and the Transformer architecture (Zhao et al., 2018) .", "cite_spans": [ { "start": 134, "end": 152, "text": "(Zhu et al., 2010)", "ref_id": "BIBREF77" }, { "start": 192, "end": 209, "text": "(Xu et al., 2015)", "ref_id": "BIBREF68" }, { "start": 236, "end": 260, "text": "(Zhang and Lapata, 2017)", "ref_id": "BIBREF73" }, { "start": 469, "end": 490, "text": "(Nisioi et al., 2017)", "ref_id": "BIBREF39" }, { "start": 524, "end": 543, "text": "(Zhao et al., 2018)", "ref_id": "BIBREF74" } ], "ref_spans": [], "eq_spans": [], "section": "Text Simplification", "sec_num": "3.4" }, { "text": "Departing from previous attempts at lexical or syntactic simplification, our proposed task of idiomatic sentence paraphrasing aims to simplify the nuance of non-compositional and figurative expressions, thereby permitting a more literal understanding of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Simplification", "sec_num": "3.4" }, { "text": "We summarize the datasets of the related tasks in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Text Simplification", "sec_num": "3.4" }, { "text": "We describe the details of the data collection, data annotation, corpus analyses and comparisons with other existing corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building the Dataset", "sec_num": "4" }, { "text": "The Parallel Idiomatic Expression Corpus (PIE) consists of idiomatic expressions (IEs), their definitions, sentences containing the IEs and corresponding sentences where the IEs are replaced with their literal paraphrases. 
One instance of the dataset is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 272, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data Collection", "sec_num": "4.1" }, { "text": "We collected a list of 1042 popular IEs and their meanings from an educational website 6 that has a broad coverage of frequently used IEs, including phrasal verbs, idioms and proverbs. For a broad coverage of IEs, we did not limit them to a specific syntactic category. The list was then split between the members of the research team, consisting of a native English speaker and three near-native English speakers. Some IEs such as \"tick off\" ( Figure 2 ) have multiple senses. The annotators labeled the sense of IEs in given sentences according to the sense information from reliable sources including the Oxford English Dictionary 7 , the Webster Dictionary 8 and the Longman Dictionary of Contemporary English 9 . IEs that were not available in any of the popular dictionaries were excluded from the dataset, as were proverbs that are independent clauses (e.g., the pen is mightier than the sword).", "cite_spans": [], "ref_spans": [ { "start": 443, "end": 452, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data Collection", "sec_num": "4.1" }, { "text": "To guarantee each sense is well represented, the annotators collected at least 5 sentences for each sense of an IE from online sources (e.g., the Corpus of Contemporary American English, and examples listed in dictionaries).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "4.1" }, { "text": "The data collection step yielded the corpus with a total of 823 IEs and 5170 sentence-pairs using these IEs (an average of 6.3 sentence-pairs per idiom). We also note that every instance (idiomatic-literal pair) is only one sentence long. 
The corpus statistics are summarized in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data Collection", "sec_num": "4.1" }, { "text": "In order to create the parallel dataset of idiomatic and literal sentences for the proposed tasks, a native English speaker was asked to rewrite each idiomatic sentence into its literal form, where the IE was replaced by a literal phrase. As part of this manual paraphrasing, the annotator was asked to paraphrase only the IE so as not to alter its meaning in the context of the sentence, to preserve the phrase's syntactic function, and to conform to the sense definition. The rest of the sentence was to be left unchanged. The annotator was free to use the original sense definition when rewriting, or a paraphrase of it. After the first annotation pass, the researchers checked the literal sentences generated by the first annotator and corrected any errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation", "sec_num": "4.2" }, { "text": "To specify the span of the IE in each idiomatic sentence and that of the literal paraphrase in the corresponding literal sentence, BIO labels were used; B marks the beginning of the idiomatic expression (resp. the literal paraphrase), I the other words in the IE (resp. in the literal paraphrase), and O all the other words in the sentence. This labeling was done automatically, given that the only difference between an idiomatic sentence and its literal counterpart is the replacement of the idiom with the literal phrase. 
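The automatic labeling just described can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes whitespace tokenization and that each sentence pair differs in exactly one contiguous span, and the function name is ours.

```python
def bio_label(sentence_tokens, paired_tokens):
    """Assign B/I tags to the span of `sentence_tokens` that differs from
    `paired_tokens`; the shared context tokens get O.

    Assumes the two sentences differ only in one contiguous replaced span,
    as is the case for PIE's idiomatic/literal pairs.
    """
    n, m = len(sentence_tokens), len(paired_tokens)
    # Longest common prefix of the two sentences.
    pre = 0
    while pre < min(n, m) and sentence_tokens[pre] == paired_tokens[pre]:
        pre += 1
    # Longest common suffix that does not overlap the prefix.
    suf = 0
    while (suf < min(n, m) - pre
           and sentence_tokens[n - 1 - suf] == paired_tokens[m - 1 - suf]):
        suf += 1
    labels = ["O"] * n
    for i, pos in enumerate(range(pre, n - suf)):
        labels[pos] = "B" if i == 0 else "I"
    return labels
```

For instance, aligning the idiomatic sentence "It is not sensible to cut corners with national security" against its literal counterpart with "save some money" in place of the idiom yields the labels O O O O O B I O O O.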
An example of the BIO-labeled sentence pair is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 585, "end": 593, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data Annotation", "sec_num": "4.2" }, { "text": "We summarize the statistics of our PIE dataset in Table 2 and compare it with existing datasets in Table 1 . We notice that the parallel sentences in our dataset are comparable in terms of sentence length, while simple sentences are much shorter in the text simplification dataset. This suggests that the tasks we propose may not result in significantly shorter sentences compared to their inputs, and this constitutes a core departure from the task of text simplification. Moreover, the sentences in our dataset are longer on average compared to the sentences in existing datasets (with the exception of text simplification data). This can pose challenges to the text generation models performing the tasks proposed in the paper.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 99, "end": 106, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Analyses", "sec_num": "4.3" }, { "text": "We also report the percentage of n-grams in the literal sentences which do not appear in the idiomatic sentences as a measure of the difference between the idiomatic and literal sentences. As shown in Table 3 , there is less variation between the source sentences and the target sentences in our dataset. This is again due to the nature of our task, which calls for local paraphrasing (rewriting only a part of the sentence).", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Corpus Analyses", "sec_num": "4.3" }, { "text": "We note that IEs may be naturally ambiguous due to the existence of both figurative and literal senses, as also pointed out in previous works. 
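The novel-n-gram percentage reported in Table 3 can be computed as in the following sketch. This is our own minimal implementation, not the authors' script; it counts unique n-grams per sentence, which may differ slightly from the paper's exact counting.

```python
def ngrams(tokens, n):
    """The set of unique n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_pct(source_tokens, target_tokens, n):
    """Percentage of source-side n-grams absent from the target side.
    For Table 3, the source is the literal sentence and the target is
    the corresponding idiomatic sentence."""
    src = ngrams(source_tokens, n)
    if not src:
        return 0.0
    return 100.0 * len(src - ngrams(target_tokens, n)) / len(src)
```

On the example pair from Section 2 (BP saved time money and energy... vs. BP cut corners...), fewer than half of the literal sentence's unigrams are novel: only the words paraphrasing the idiom differ, which is why the variation in Table 3 is small.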
A small portion of IEs in our dataset have multiple senses, and one example is \"tick off \" in Figure 2 . Table 4 presents the distribution of the senses of the IEs in our dataset, and the average number of senses is 1.05, suggesting that the majority of IEs in our dataset are monosemous.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 248, "end": 255, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Corpus Analyses", "sec_num": "4.3" }, { "text": "Noting that the literal counterparts of the idiomatic sentences were manually created, the quality of our dataset may be called into question. We point out that, in an effort to quickly obtain sentences of good quality, and in line with existing datasets for related tasks with idiomatic expressions (Haagsma et al., 2020; Korkontzelos et al., 2013) , we collected idiomatic expressions in the wild. However, as acknowledged by previous dataset creation efforts, not all IEs occur equally frequently. Table 3 : The percentage of n-grams in source sentences which do not appear in the target sentences. In our case, it is the percentage of n-grams in literal sentences which do not appear in the idiomatic sentences.", "cite_spans": [ { "start": 278, "end": 300, "text": "(Haagsma et al., 2020;", "ref_id": "BIBREF16" }, { "start": 301, "end": 327, "text": "Korkontzelos et al., 2013)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 455, "end": 462, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dataset quality", "sec_num": "4.4" }, { "text": "This can result in a representation bias. In addition, finding true paraphrases of IEs in the wild is hard. In light of these practical data-related concerns, we resorted to a manual paraphrasing of the IEs as a trade-off between naturalness and representation. 
This idea of using non-natural instances is also influenced by successful recent approaches to training data collection and data augmentation with synthetic methods in severely resource-constrained domains such as machine translation (Sennrich et al., 2016) and clinical language processing (Ive et al., 2020).", "cite_spans": [ { "start": 531, "end": 554, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF51" }, { "start": 588, "end": 606, "text": "(Ive et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset quality", "sec_num": "4.4" }, { "text": "Translation Models: Considering that our tasks of idiomatic sentence generation and paraphrasing have never been studied before, and that both are text generation tasks, we first chose basic end-to-end models that have shown state-of-the-art performance on other text generation tasks. Accordingly, we used the LSTM-based Seq2Seq model (Sutskever et al., 2014) and the transformer architecture (Vaswani et al., 2017 ). These will be referred to as Translation Models. Copy Models: Because the idiomatic sentences and their literal counterparts have identical context words, we expect the context to remain unchanged during generation. This prompts the use of the copy-enriched seq2seq model (Jhamtani et al., 2017 ) and the transformer model with a copy mechanism (Gehrmann et al., 2018) 10 (hereafter collectively called Copy Models). BART: Considering the similarity between our tasks and paraphrasing, we also chose the pretrained BART (Lewis et al., 2019) , successfully used for text simplification and paraphrasing.
We fine-tuned it on our training instances.", "cite_spans": [ { "start": 355, "end": 379, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF61" }, { "start": 413, "end": 434, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF63" }, { "start": 711, "end": 733, "text": "(Jhamtani et al., 2017", "ref_id": "BIBREF22" }, { "start": 784, "end": 807, "text": "(Gehrmann et al., 2018)", "ref_id": "BIBREF11" }, { "start": 960, "end": 980, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "Finally, we used a sequential model inspired by the retrieve-delete-generate pipeline (Sudhakar et al., 2019; Zhou et al., 2021) that showed competitive performance on style transfer. We note that novel instances of idiomatic sentences cannot be generated without previously encountering the IE. Considering this, we set up the pipeline model with a retrieval stage that retrieves an IE for a given literal sentence (resp. the correct sense given an idiomatic sentence). Toward this, a RoBERTa model for sentence classification was fine-tuned on our training data. The concatenation of the input sentence and the correct idiom or sense is considered a positive instance, and that of the input sentence and an irrelevant idiom or a different sense is considered a negative instance. Given all the concatenations of the input sentence with the idioms in our dataset, this stage aims to identify the correct one. In the deletion stage, we deleted the literal phrase that should be replaced by the retrieved idiom (resp. deleted the IE in the given idiomatic sentence). Again, a RoBERTa model for sequence classification was fine-tuned on our training data with BIO labels. This stage assigns one of the BIO labels to each token in the input sentence and deletes the tokens labeled B and I.
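The deletion step just described reduces, once per-token BIO labels are predicted, to dropping the B/I span while keeping the context tokens. A minimal sketch (the classifier itself is omitted; label names follow the standard BIO scheme):

```python
def delete_tagged_span(tokens, labels):
    """Remove tokens labeled B or I (the idiomatic or literal phrase),
    keeping the O-labeled context tokens."""
    return [tok for tok, lab in zip(tokens, labels) if lab == "O"]

tokens = ["you", "ca", "n't", "sit", "on", "the", "fence", "any", "longer"]
labels = ["O", "O", "O", "B", "I", "I", "I", "O", "O"]
print(delete_tagged_span(tokens, labels))
# → ['you', 'ca', "n't", 'any', 'longer']
```

The surviving context tokens, concatenated with the retrieved idiom or sense, then form the input to the generation stage.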
In the generating stage, we combined the results from the retrieval and deletion stages and used a fine-tuned BART model to generate the final output: the literal sentence for the task of idiomatic sentence paraphrasing and the idiomatic sentence for the task of idiomatic sentence generation.", "cite_spans": [ { "start": 86, "end": 109, "text": "(Sudhakar et al., 2019;", "ref_id": "BIBREF60" }, { "start": 110, "end": 128, "text": "Zhou et al., 2021)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "Pipeline Model:", "sec_num": null }, { "text": "For all the models, the maximum sentence length was set to 128. The batch size and base learning rate were set to 32 and 5e-5, respectively. These models were all trained and run on the Google Colab platform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For the translation models and copy models, the dimensions of the hidden state vectors and of the word embeddings were both set to 256. These baselines were trained with the parallel sentence pairs as appropriate, i.e., taking literal sentences as input and generating the corresponding idiomatic sentences, or vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "The baseline pretrained BART model was trained for 5 epochs, and during inference a beam search with 5 beams was used, with top-k set to 100 and top-p set to 0.5. The other hyper-parameters were set to their default values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "All the RoBERTa and BART models in the pipeline model were trained for 5 epochs. For the BART model, during inference, we used a beam search with 5 beams, with top-k set to 100 and top-p set to 0.5.
The other hyper-parameters were set to their default values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "For automatic evaluation, ROUGE (Lin, 2004) , BLEU (Papineni et al., 2002) , METEOR (Lavie and Agarwal, 2007) and SARI (Xu et al., 2016) are used to measure the similarity between the generated sentences and the references. These metrics have been widely used in various text generation tasks such as paraphrasing, style transfer and text simplification. To measure linguistic quality, we use a pre-trained language model, BERT, to calculate perplexity scores, and a recently proposed measure, GRUEN (Zhu and Bhat, 2020).", "cite_spans": [ { "start": 32, "end": 43, "text": "(Lin, 2004)", "ref_id": "BIBREF32" }, { "start": 51, "end": 74, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF42" }, { "start": 84, "end": 109, "text": "(Lavie and Agarwal, 2007)", "ref_id": "BIBREF29" }, { "start": 119, "end": 136, "text": "(Xu et al., 2016)", "ref_id": "BIBREF69" }, { "start": 496, "end": 516, "text": "(Zhu and Bhat, 2020)", "ref_id": "BIBREF76" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "Considering that automatic evaluation cannot fully analyze the results, we use human evaluation as a complement to the automatic metrics. For each task, we randomly sampled 100 input sentences and the corresponding outputs of all baselines.
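The behavior of the overlap-based metrics above on idiom paraphrases can be illustrated with a minimal clipped-unigram-precision sketch (full BLEU additionally uses higher-order n-grams and a brevity penalty; this simplification and the example sentences are ours):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the first ingredient of BLEU."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    clipped = sum(min(count, ref[word]) for word, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

reference = "he is pulling your leg"
# A synonymous paraphrase scores low despite being appropriate,
# while an exact copy scores perfectly.
print(round(unigram_precision("he is teasing you", reference), 2))       # → 0.5
print(round(unigram_precision("he is pulling your leg", reference), 2))  # → 1.0
```

This is the weakness noted later in the paper: token-overlap metrics penalize valid synonymous idioms or literal phrases, motivating the complementary human evaluation.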
Human annotations were collected with respect to the context preservation, target inclusion, fluency and overall meaning of the generated sentences, based on the following criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "(1) Context preservation measures how well the context surrounding the idiomatic/literal phrase is preserved in the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "(2) Target inclusion checks whether the correct IE or literal phrase is used in the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "(3) Fluency evaluates the fluency and readability of the output sentence, including how appropriately verb tense and noun and pronoun forms are used. (4) Overall meaning evaluates the overall quality of the output sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "For each output sentence, two annotators with native-speaker-level English proficiency were asked to rate it on a scale from 1 to 6 in terms of context preservation, fluency and overall meaning. Higher scores indicate better quality. For target inclusion, they were asked to rate it on a scale from 1 to 3: a score of 1 denotes that the target phrase is not included in the output at all, 2 denotes partial inclusion, and 3 denotes complete inclusion. We report the average score over all samples for each baseline in each aspect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.3" }, { "text": "Results. We report the automatic and human evaluation results in Table 5 and 6. More detailed results with all the metrics considered are in the appendix. On both tasks, going by the automatic metrics, the copy-enriched transformer, the pretrained BART model and the pipeline model perform better than the other baselines.
Pretrained BART achieves the best performance in BLEU and GRUEN, and the pipeline model does best in SARI. As for human evaluation, BART and the pipeline again achieve the best performance among the baselines. While BART is the best at preserving context and achieving fluency, the pipeline is the best at idiom paraphrasing and generation. The overall agreement score for human evaluation is 0.76. Model competence. BART and the pipeline model outperform the other baselines in that they leverage auxiliary information (large pretraining corpora and selective idiomatic expression information, respectively) which is not available to the other models. The benefit of the copy mechanism, which explicitly retains the context as required by our tasks, is Table 5 : Automatic evaluation results for the task of idiomatic sentence generation (s2i) and idiomatic sentence paraphrasing (i2s).", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 5", "ref_id": null }, { "start": 1060, "end": 1067, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Model Context Target Fluency Overall s2i i2s s2i i2s s2i i2s s2i i2s Seq2Seq", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "1.3 1.2 1.1 1.1 1.1 1.0 1.7 1.7 Seq2Seq with copy 3.8 3.8 1.6 1.7 2.1 3.4 3.5 3.6 Transformer 4.2 4.3 1.3 1.2 3.3 3.4 3.4 3.3 Transformer with copy 5.4 5.3 1.2 1.6 4.6 4.6 3.9 4.2 Pretrained BART 5.9 5.9 1.5 2.1 5.9 5.9 4.4 5.0 Pipeline 5.6 5.8 1.7 2.2 5.1 5.3 4.5 5.1 Table 6 : Human evaluation results for the two tasks.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "shown in the corresponding gains in automatic and manual evaluation scores for both Seq2Seq and transformer models.", "cite_spans": [], "ref_spans": [], "eq_spans": 
[], "section": "Results and Discussion", "sec_num": "6" }, { "text": "When it comes to the comparison between BART and the pipeline, BART does better at retaining the contexts surrounding idiomatic expressions, given its high context score in human evaluation, while the pipeline is better at handling the idiomatic part, i.e., target inclusion. Despite the reported superior performance of BART in related text generation tasks (Lewis et al., 2019) , our experiments show that BART has limited capability in idiom paraphrasing and generation. The pipeline method, owing to error propagation from its retrieval and deletion modules, suffers in terms of both context preservation and fluency. For the task of idiomatic sentence generation, the accuracy of the retrieval module is 0.27 and the F1 score of the deletion module is 0.68. For the task of idiomatic sentence paraphrasing, the accuracy of the retrieval module is 0.96 and the F1 score of the deletion module is 0.85. Comparison between two tasks. According to the human evaluation results in Table 6 , both BART and the pipeline received higher scores for idiomatic sentence paraphrasing than for idiomatic sentence generation, suggesting that paraphrasing is the easier of the two tasks. This resonates with our intuitions as language users: given a lexical resource, paraphrasing an IE is easier than finding the right IE to replace a phrase. Limitation of automatic metrics. Table 7 presents the correlation between automatic metrics and human judgements. None of the correlations between the automatic metrics and the human evaluation scores is high. BLEU and SARI, which mainly measure token overlap, penalize synonymous idioms or literal phrases even when they are appropriate. GRUEN, which aims to measure text quality, correlates only weakly with fluency and overall meaning. Therefore, more reliable automatic evaluation methods are needed. Error analysis. 
For the task of idiomatic sentence generation, the primary challenge lies in identifying the appropriate IE, which is hardest when the IE is highly non-compositional (e.g., bird of passage in Table 11 ). The examples are presented in Table 11 in the Appendix. For the task of idiomatic sentence paraphrasing, one challenge is choosing the correct sense of the idiom. As shown in Table 12 in the Appendix, all the baseline models were unable to generate the correct literal phrases for \"alpha and omega\", which has two senses: the beginning and the end; the principal element. Also, we noticed that the strong baselines, pretrained BART and the pipeline model, tend to use a short but inaccurate literal phrase when the correct one is long. The paraphrasing of \"the bird of passage\" in Table 12 is an example. Applications: Research on the proposed tasks has many potential practical applications. 1) An idiomatic sentence paraphrasing tool would be of importance in several language processing settings encountered by humans and machines. The non-literal and stylized meanings of multi-word expressions (MWEs) in general, and idioms in particular, pose two broad kinds of challenges. First, they affect readability in target populations.
Table 7: Instance-level Spearman's correlations between human and automatic evaluation for pretrained BART.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Literal sentence You can't delay making a decision any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "You can't sit on the fence any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Idiomatic sentence", "sec_num": null }, { "text": "You can't be in the obsession any night , you need to make up your plans Transformer you can't delay making a decision of any longer , you need to make your mind your mind . Seq2Seq-copy you can't sit sit the fence any , , you need to to up your . Transformer-copy you can't delay making a decision any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "You can't delay making a decisione any longer, you make your mind. BART You can't delay making a decision any longer, you need to make up your own mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "You can't wait on the money any rival , you need to make up your energy . 
Transformer you can't sit on the ? any longer , you need to make up your mind . Seq2Seq-copy you can't delay making any any any , you need to make your your mind . Transformer-copy you can't sit on the troublesome any longer , you need to make your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I2S Seq2Seq", "sec_num": null }, { "text": "You can't stay on the fence any longer, you need to make up your mind. BART You can't be indecisive any longer, you need to make up your mind. Table 8 : A sample of generated idiomatic sentences. Text in bold and italics red represents the idiomatic expressions correctly included in the outputs, text in bold blue represents the literal counterparts in the input sentences and text in underlined olive represents the idioms or literal phrases that are poorly generated.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "For instance, despite their intact structural language competence, individuals with Asperger syndrome and more broadly those with autism spectrum disorder are known to experience significant challenges understanding figurative language (idioms) in their native language (Kalandadze et al., 2018) . It is also widely acknowledged that idiomatic expressions are some of the hardest aspects of language acquisition and processing for second language learners (Liontas, 2002; Ellis et al., 2008; Canut et al., 2020) . Moreover, natural language processing systems are known to be negatively impacted by idioms in text ((Salton et al., 2014; Shao et al.; Shutova et al., 2013) have shown the negative impact of idioms and metaphors on machine translation, leading to awkward or incorrect translations from English to other languages).
Fruitful results on this task can lead to a system capable of recognizing and interpreting IEs in unrestricted text as a central component of any real-world NLP application (e.g., information retrieval, machine translation, question answering, information extraction, and opinion mining). 2) A realistic application of the idiomatic sentence generation task would be computer-aided style checking, where a post-processing tool could suggest a list of idioms to replace a literal phrase in a sentence. 3) True integration with an external NLP application would require combining the first step of IE identification with paraphrasing, as done in (Shutova et al., 2013) , and this can be a future direction for research.", "cite_spans": [ { "start": 264, "end": 289, "text": "(Kalandadze et al., 2018)", "ref_id": "BIBREF24" }, { "start": 450, "end": 465, "text": "(Liontas, 2002;", "ref_id": "BIBREF33" }, { "start": 466, "end": 485, "text": "Ellis et al., 2008;", "ref_id": "BIBREF8" }, { "start": 486, "end": 505, "text": "Canut et al., 2020)", "ref_id": "BIBREF5" }, { "start": 610, "end": 631, "text": "(Salton et al., 2014;", "ref_id": "BIBREF48" }, { "start": 632, "end": 644, "text": "Shao et al.;", "ref_id": "BIBREF52" }, { "start": 645, "end": 666, "text": "Shutova et al., 2013)", "ref_id": "BIBREF56" }, { "start": 1472, "end": 1494, "text": "(Shutova et al., 2013)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "To conclude, in this paper, we proposed two new tasks: idiomatic sentence generation and paraphrasing. We also presented PIE, the first parallel idiom corpus. We benchmark existing end-to-end trained neural network models and a pipeline method on PIE and analyze their performance for our tasks.
Our experiments and analyses reveal the competence and shortcomings of the available methods, underscoring the need for continued research on processing idiomatic expressions. Future work should explore richer model architectures and more reliable evaluation methods to improve performance. Literal sentence Joe , being one who is here today and gone tomorrow , stayed the night , had some rest and ate some food and left early the next morning .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Joe , being the bird of passage he is , stayed the night , had some rest and ate some food and left early the next morning .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "First , being one , and putting the project going to be joined the ones , had some ice row and creating some people and creating some expensive of both the time .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Transformer joe , being one who is here today and gone tomorrow , kept the night , had some rest and punched some food a great early . 
Seq2Seq with copy joe , being the bird of he he , , , , , , , some some some some and and and and the .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Transformer with copy joe , being one who is here today and gone tomorrow , stayed the night , had a rest and ate food left the next early .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Pretrained BART Joe, being one who is here today and gone tomorrow, stayed the night, had some rest and ate some food and left early the next morning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Pipeline cool heels joe, being one who is here today and gone tomorrow, stayed the night, and ate some food and left early the next morning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "My life starts from you and ends at you , so you are my first and my last .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute multiple meaning Literal sentence", "sec_num": null }, { "text": "My life starts from you and ends at you , so you are my alpha and omega .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "My friend from you and offensive , and yet you are my dream and my loved . 
Pretrained BART My life starts from you and ends at you , so you are my first and my last.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Close the books, so you are my my first and my last.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "You can't delay making a decision any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute high non-compositionality Literal sentence", "sec_num": null }, { "text": "You can't sit on the fence any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "You can't be in the obsession any night , you need to make up your plans . Transformer you can't delay making a decision of any longer , you need to make your mind your mind . Seq2Seq with copy you can't sit sit the fence any , , you need to to up your . 
Transformer with copy you can't delay making a decision any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "Pretrained BART You can't delay making a decision any longer, you need to make up your own mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "You can't delay making a decisione any longer, you make your mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "Finding the ruins of Babylon was the archaeologist 's greatest find .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute low non-compositionality Literal sentence", "sec_num": null }, { "text": "Finding the ruins of Babylon was the archaeologist 's treasure trove .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "Missing the aftermath of pouring down the cake 's share of the city . Transformer catching up with silver lining of the challenges 's volatility .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "finding the ruins of unk was the 's 's trove . Transformer with copy finding the ruins of babylon was the archaeologist 's greatest silver spoons .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq with copy", "sec_num": null }, { "text": "Pretrained BART Finding the ruins of Babylon was the archaeologist's greatest find.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq with copy", "sec_num": null }, { "text": "Finding the ruins of babylon was the archaeologist' treasure trove. Table 11 : Samples of generated idiomatic sentences. Text in blue represents the idiomatic expressions correctly included in the outputs; text in red represents the literal counterparts in the input sentences. 
Text in green represents the idioms that are poorly generated.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 76, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "The parallel corpus is available at https://github.com/zhjjn/MWE_PIE.git", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://translate.google.com/. Accessed November 19, 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The consideration of whether idioms are semantic-, pragmatic- or discourse-level phenomena is important, but beyond the scope of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.microsoft.com/enus/download/details.aspx?id=523985 https://www.kaggle.com/aymenmouelhi/quoraduplicate-questions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.theidioms.com 7 https://www.oxfordlearnersdictionaries.com 8 https://www.merriam-webster.com 9 https://www.ldoceonline.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/lipiji/TranSummar", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We provide more detailed automatic evaluation results in Table 9 and 10.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "A.1 Detailed Evaluation Results", "sec_num": null }, { "text": "We provide examples generated by all models on idiomatic sentence generation and transfer tasks in Table 11 and 12 respectively.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Table 11", "ref_id": null } ], "eq_spans": [], "section": "A.2 Generated Examples", "sec_num": null }, { "text": "high
non-compositionality Idiomatic sentence Joe , being the bird of passage he is , stayed the night , had some rest and ate some food and left early the next morning .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute", "sec_num": null }, { "text": "Joe , being one who is here today and gone tomorrow , stayed the night , had some rest and ate some food and left early the next morning .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "And , sitting the part of the Bieber he is , seemed the morning , he some smart and wound problems so well and gives early at the next morning .Transformer joe , being the guards of nowhere he is , the night the night , and had some dealers and left the morning left the next morning .Seq2Seq with copy joe , being one who here today and tomorrow tomorrow stayed stayed night , had some and and and and and left next next next .Transformer with copy joe , being the bird of energy is stayed , stayed the night , some rest and ate ate some food left the next morning .Pretrained BART Joe, being the traveler he is, stayed the night, had some rest and ate some food and left early the next morning.Pipeline joe, being the person he is, stayed the night, had some rest and ate some food and left early the next morning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "My life starts from you and ends with you , so you are my alpha and omega .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute multiple meaning Idiomatic sentence", "sec_num": null }, { "text": "My life starts from you and ends with you , so you are my first and my last .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "My life dreams from you and read your family at you , so you are . 
You can't sit on the fence any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "You can't delay making a decision any longer , you need to make up your mind .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "You can't wait on the money any rival , you need to make up your energy . Transformer you can't sit on the ? any longer , you need to make up your mind . Seq2Seq with copy you can't delay making any any any , you need to make your your mind . Transformer with copy you ca n't sit on the troublesome any longer , you need to make your mind .Pretrained BART You can't be indecisive any longer, you need to make up your mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "You can't stay on the fence any longer, you need to make up your mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pipeline", "sec_num": null }, { "text": "Finding the ruins of Babylon was the archaeologist 's treasure trove .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute low non-compositionality Idiomatic sentence", "sec_num": null }, { "text": "Finding the ruins of Babylon was the archaeologist 's greatest find .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference", "sec_num": null }, { "text": "Edward the trap of nature was the racial out of Robert . Transformer finding and hide of confiement was shocking 's legal code .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq", "sec_num": null }, { "text": "finding the ruins of unk was the unk 's greatest find . 
Transformer with copy finding the ruins of babylon was the archaeologist's family members . Pretrained BART Finding the ruins of Babylon was the archaeologist's greatest find.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Seq2Seq with copy", "sec_num": null }, { "text": "Finding the ruins of babylon was the archaeologist's trove. Table 12 : Samples of generated literal sentences. Text in red represents the appropriate literal phrases included in the outputs. Text in blue represents the idioms in the input sentences. Text in green represents the literal phrases that are poorly generated.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Table 12", "ref_id": null } ], "eq_spans": [], "section": "Pipeline", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A computational model of the metaphor generation process", "authors": [ { "first": "Keiga", "middle": [], "last": "Abe", "suffix": "" }, { "first": "Kayo", "middle": [], "last": "Sakamoto", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Annual Meeting of the Cognitive Science Society", "volume": "28", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keiga Abe, Kayo Sakamoto, and Masanori Nakagawa. 2006. A computational model of the metaphor generation process. 
In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 28.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multitask models for controlling the complexity of neural machine translation", "authors": [ { "first": "Sweta", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Widening Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "136--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sweta Agrawal and Marine Carpuat. 2020. Multitask models for controlling the complexity of neural machine translation. In Proceedings of the Fourth Widening Natural Language Processing Workshop, pages 136-139.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "In other words: A coursebook on translation", "authors": [ { "first": "Mona", "middle": [], "last": "Baker", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mona Baker. 2018. In other words: A coursebook on translation. Routledge.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Oxford handbook of regulation", "authors": [ { "first": "Robert", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Cave", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Lodge", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Baldwin, Martin Cave, and Martin Lodge. 2010. The Oxford handbook of regulation.
Oxford University Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Longman grammar of written and spoken english", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" }, { "first": "Stig", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Conrad", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Finegan", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Biber, Stig Johansson, Geoffrey Leech, Susan Conrad, and Edward Finegan. 1999. Longman grammar of written and spoken english. Harlow: Longman.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Vous avez dit falc? pour une adaptation linguistique des textes destin\u00e9s aux migrants nouvellement arriv\u00e9s", "authors": [ { "first": "Emmanuelle", "middle": [], "last": "Canut", "suffix": "" }, { "first": "Juliette", "middle": [], "last": "Delahaie", "suffix": "" }, { "first": "Magali", "middle": [], "last": "Husianycia", "suffix": "" } ], "year": 2020, "venue": "Langage et societe", "volume": "", "issue": "3", "pages": "171--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuelle Canut, Juliette Delahaie, and Magali Husianycia. 2020. Vous avez dit falc? pour une adaptation linguistique des textes destin\u00e9s aux migrants nouvellement arriv\u00e9s.
Langage et societe, (3):171-201.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multiword expression processing: A survey", "authors": [ { "first": "Mathieu", "middle": [], "last": "Constant", "suffix": "" }, { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Monti", "suffix": "" }, { "first": "Lonneke", "middle": [], "last": "Van Der", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Plas", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Amalia", "middle": [], "last": "Rosner", "suffix": "" }, { "first": "", "middle": [], "last": "Todirascu", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "4", "pages": "837--892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathieu Constant, G\u00fcl\u015fen Eryigit, Johanna Monti, Lonneke Van Der Plas, Carlos Ramisch, Michael Rosner, and Amalia Todirascu. 2017. Multiword expression processing: A survey. Computational Linguistics, 43(4):837-892.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The vnc-tokens dataset", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the LREC Workshop Towards a Shared Task for Multiword Expressions", "volume": "", "issue": "", "pages": "19--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2008. The vnc-tokens dataset.
In Proceedings of the LREC Workshop Towards a Shared Task for Multiword Expressions (MWE 2008), pages 19-22.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Formulaic language in native and second language speakers: Psycholinguistics, corpus linguistics, and tesol", "authors": [ { "first": "C", "middle": [], "last": "Nick", "suffix": "" }, { "first": "Rita", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "Carson", "middle": [], "last": "Simpson-Vlach", "suffix": "" } ], "year": 2008, "venue": "Tesol Quarterly", "volume": "42", "issue": "3", "pages": "375--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nick C Ellis, Rita Simpson-Vlach, and Carson Maynard. 2008. Formulaic language in native and second language speakers: Psycholinguistics, corpus linguistics, and tesol. Tesol Quarterly, 42(3):375-396.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised type and token identification of idiomatic expressions", "authors": [ { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "1", "pages": "61--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions.
Computational Linguistics, 35(1):61-103.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Ppdb: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bottom-up abstractive summarization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.10792" ] }, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization.
arXiv preprint arXiv:1808.10792.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Sarcasm analysis using conversation context", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Alexander R Fabbri", "suffix": "" }, { "first": "", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "4", "pages": "755--792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. Computational Linguistics, 44(4):755-792.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Reinforcement learning based text style transfer without parallel training corpus", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jinjun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Wen-Mei", "middle": [], "last": "Hwu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3168--3180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training corpus.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3168-3180.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extract, transform and filling: A pipeline model for question paraphrasing based on template", "authors": [ { "first": "Yunfan", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Zhongyu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)", "volume": "", "issue": "", "pages": "109--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yunfan Gu, Zhongyu Wei, et al. 2019. Extract, transform and filling: A pipeline model for question paraphrasing based on template. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 109-114.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A deep generative framework for paraphrase generation", "authors": [ { "first": "Ankush", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Prawaan", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Rai", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation.
In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Magpie: A large corpus of potentially idiomatic expressions", "authors": [ { "first": "Hessel", "middle": [], "last": "Haagsma", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "279--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hessel Haagsma, Johan Bos, and Malvina Nissim. 2020. Magpie: A large corpus of potentially idiomatic expressions. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 279-287.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering", "authors": [ { "first": "Ruining", "middle": [], "last": "He", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" } ], "year": 2016, "venue": "proceedings of the 25th international conference on world wide web", "volume": "", "issue": "", "pages": "507--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering.
In Proceedings of the 25th International Conference on World Wide Web, pages 507-517.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1587--1596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596. JMLR.org.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Generation and evaluation of artificial mental health records for natural language processing", "authors": [ { "first": "Julia", "middle": [], "last": "Ive", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Viani", "suffix": "" }, { "first": "Joyce", "middle": [], "last": "Kam", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Somain", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Puntis", "suffix": "" }, { "first": "Rudolf", "middle": [ "N" ], "last": "Cardinal", "suffix": "" }, { "first": "Angus", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Sumithra", "middle": [], "last": "Velupillai", "suffix": "" } ], "year": 2020, "venue": "NPJ Digital Medicine", "volume": "3", "issue": "1", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text":
"Julia Ive, Natalia Viani, Joyce Kam, Lucia Yin, Somain Verma, Stephen Puntis, Rudolf N Cardinal, Angus Roberts, Robert Stewart, and Sumithra Velupillai. 2020. Generation and evaluation of artificial mental health records for natural language processing. NPJ Digital Medicine, 3(1):1-9.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Adversarial example generation with syntactically controlled paraphrase networks", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1875--1885", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The boundaries of the lexicon. Idioms: Structural and psychological perspectives", "authors": [ { "first": "Ray", "middle": [], "last": "Jackendoff", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "133--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ray Jackendoff. 1995. The boundaries of the lexicon.
Idioms: Structural and psychological perspectives, pages 133-165.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Shakespearizing modern language using copy-enriched sequence to sequence models", "authors": [ { "first": "Harsh", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on Stylistic Variation", "volume": "", "issue": "", "pages": "10--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Variation, pages 10-19.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic sarcasm detection: A survey", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Mark", "middle": [ "J" ], "last": "Car", "suffix": "" } ], "year": 2017, "venue": "ACM Computing Surveys (CSUR)", "volume": "50", "issue": "5", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey.
ACM Computing Surveys (CSUR), 50(5):1-22.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Figurative language comprehension in individuals with autism spectrum disorder: A meta-analytic review", "authors": [ { "first": "Tamar", "middle": [], "last": "Kalandadze", "suffix": "" }, { "first": "Courtenay", "middle": [], "last": "Norbury", "suffix": "" }, { "first": "Terje", "middle": [], "last": "Naerland", "suffix": "" }, { "first": "Kari-Anne B", "middle": [], "last": "Naess", "suffix": "" } ], "year": 2018, "venue": "Autism", "volume": "22", "issue": "2", "pages": "99--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamar Kalandadze, Courtenay Norbury, Terje Naerland, and Kari-Anne B Naess. 2018. Figurative language comprehension in individuals with autism spectrum disorder: A meta-analytic review. Autism, 22(2):99-117.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Idioms in the classroom: An investigation of language unit and mainstream teachers' use of idioms", "authors": [ { "first": "Debra", "middle": [], "last": "Kerbel", "suffix": "" }, { "first": "Pam", "middle": [], "last": "Grunwell", "suffix": "" } ], "year": 1997, "venue": "Child Language Teaching and Therapy", "volume": "13", "issue": "2", "pages": "113--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debra Kerbel and Pam Grunwell. 1997. Idioms in the classroom: An investigation of language unit and mainstream teachers' use of idioms.
Child Language Teaching and Therapy, 13(2):113-123.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Text simplification for information-seeking applications", "authors": [ { "first": "Kevin", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Knight", "suffix": "" }, { "first": "", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems", "volume": "", "issue": "", "pages": "735--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Kevin Knight, and Daniel Marcu. 2004. Text simplification for information-seeking applications. In OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems\", pages 735-747. Springer.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semeval-2013 task 5: Evaluating phrasal semantics", "authors": [ { "first": "Ioannis", "middle": [], "last": "Korkontzelos", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Fabio", "middle": [ "Massimo" ], "last": "Zanzotto", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", "volume": "2", "issue": "", "pages": "39--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ioannis Korkontzelos, Torsten Zesch, Fabio Massimo Zanzotto, and Chris Biemann. 2013. Semeval-2013 task 5: Evaluating phrasal semantics.
In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 39-47.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A continuously growing dataset of sentential paraphrases", "authors": [ { "first": "Wuwei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Siyu", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.00391" ] }, "num": null, "urls": [], "raw_text": "Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. arXiv preprint arXiv:1708.00391.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Abhaya", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the second workshop on statistical machine translation", "volume": "", "issue": "", "pages": "228--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments.
In Proceedings of the second workshop on statistical machine translation, pages 228-231.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Context and idiom understanding in second languages", "authors": [ { "first": "John", "middle": [], "last": "Liontas", "suffix": "" } ], "year": 2002, "venue": "EUROSLA yearbook", "volume": "2", "issue": "1", "pages": "155--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Liontas. 2002. Context and idiom understanding in second languages. EUROSLA yearbook, 2(1):155-185.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A generalized idiom usage recognition model based on semantic compatibility", "authors": [ { "first": "Changsheng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6738--6745", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changsheng Liu and Rebecca Hwa. 2019. A generalized idiom usage recognition model based on semantic compatibility.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6738-6745.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Word embedding and wordnet based metaphor identification and interpretation", "authors": [ { "first": "Rui", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Chenghua", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Guerin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1222--1231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and wordnet based metaphor identification and interpretation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1222-1231.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews", "authors": [ { "first": "Julian John Mcauley", "middle": [], "last": "", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd international conference on World Wide Web", "volume": "", "issue": "", "pages": "897--908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian John McAuley and Jure Leskovec. 2013. From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews.
In Proceedings of the 22nd international conference on World Wide Web, pages 897-908.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Metaphor as a medium for emotion: An empirical study", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "23--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 23-33.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Identification of nonliteral language in social media: A case study on sarcasm", "authors": [ { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Gonzalez-Ibanez", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" } ], "year": 2016, "venue": "Journal of the Association for Information Science and Technology", "volume": "67", "issue": "11", "pages": "2725--2737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smaranda Muresan, Roberto Gonzalez-Ibanez, Debanjan Ghosh, and Nina Wacholder. 2016. Identification of nonliteral language in social media: A case study on sarcasm.
Journal of the Association for Information Science and Technology, 67(11):2725-2737.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Exploring neural text simplification models", "authors": [ { "first": "Sergiu", "middle": [], "last": "Nisioi", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "\u0160tajner", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Liviu P", "middle": [], "last": "Dinu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "85--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergiu Nisioi, Sanja \u0160tajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Generating conceptual metaphors from proposition stores", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Ovchinnikova", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Zaytsev", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Wertheim", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Israel", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.7619" ] }, "num": null, "urls": [], "raw_text": "Ekaterina Ovchinnikova, Vladimir Zaytsev, Suzanne Wertheim, and Ross Israel. 2014. Generating conceptual metaphors from proposition stores.
arXiv preprint arXiv:1409.7619.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Two puzzles for linguistic theory: Nativelike selection and nativelike fluency", "authors": [ { "first": "Andrew", "middle": [], "last": "Pawley", "suffix": "" }, { "first": "Frances", "middle": [ "Hodgetts" ], "last": "Syder", "suffix": "" } ], "year": 2014, "venue": "Language and communication", "volume": "", "issue": "", "pages": "203--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Pawley and Frances Hodgetts Syder. 2014. Two puzzles for linguistic theory: Nativelike selection and nativelike fluency. In Language and communication, pages 203-239.
Routledge.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Neural paraphrase generation with stacked residual lstm networks", "authors": [ { "first": "Aaditya", "middle": [], "last": "Prakash", "suffix": "" }, { "first": "A", "middle": [], "last": "Sadid", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Datla", "suffix": "" }, { "first": "Joey", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Oladimeji", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Farri", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2923--2934", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual lstm networks. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 2923- 2934.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.06535" ] }, "num": null, "urls": [], "raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer. 
arXiv preprint arXiv:1803.06535.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Multiword expressions: hard going or plain sailing? Language Resources and Evaluation", "authors": [ { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Piao", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Evert", "suffix": "" }, { "first": "Begona", "middle": [], "last": "Villada Moir\u00f3n", "suffix": "" } ], "year": 2010, "venue": "", "volume": "44", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Rayson, Scott Piao, Serge Sharoff, Stefan Evert, and Begona Villada Moir\u00f3n. 2010. Multiword ex- pressions: hard going or plain sailing? Language Resources and Evaluation, 44(1-2):1-5.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Multiword expressions: A pain in the neck for nlp", "authors": [ { "first": "A", "middle": [], "last": "Ivan", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "International conference on intelligent text processing and computational linguistics", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan A Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for nlp. In Interna- tional conference on intelligent text processing and computational linguistics, pages 1-15. 
Springer.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "An empirical study of the impact of idioms on phrase based statistical machine translation of en", "authors": [ { "first": "Giancarlo", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [], "last": "Kelleher", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giancarlo Salton, Robert Ross, and John Kelleher. 2014. An empirical study of the impact of idioms on phrase based statistical machine translation of en- glish to brazilian-portuguese.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "The parseme shared task on automatic identification of verbal multiword expressions", "authors": [ { "first": "Agata", "middle": [], "last": "Savary", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Silvio", "middle": [ "Ricardo" ], "last": "Cordeiro", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Sangati", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Qasemi", "middle": [], "last": "Behrang", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Zadeh", "suffix": "" }, { "first": "Fabienne", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Voula", "middle": [], "last": "Cap", "suffix": "" }, { "first": "Ivelina", "middle": [], "last": "Giouli", "suffix": "" }, { "first": "", "middle": [], "last": "Stoyanova", "suffix": "" } ], "year": 2017, "venue": "The 13th Workshop on Multiword Expression at EACL", "volume": "", "issue": "", "pages": "31--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Savary, Carlos Ramisch, Silvio Ricardo Cordeiro, Federico Sangati, Veronika Vincze, Behrang Qasemi Zadeh, Marie Candito, Fabienne Cap, Voula Giouli, Ivelina Stoyanova, et al. 2017. 
The parseme shared task on automatic identification of verbal multiword expressions. In The 13th Work- shop on Multiword Expression at EACL, pages 31- 47.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Vocabulary in language teaching", "authors": [ { "first": "Norbert", "middle": [], "last": "Schmitt", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Schmitt", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Norbert Schmitt and Diane Schmitt. 2020. Vocabulary in language teaching. Cambridge university press.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. 
In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Evaluating machine translation performance on chinese idioms with a blacklist method", "authors": [ { "first": "Yutong", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Fancellu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yutong Shao, Rico Sennrich, Bonnie Webber, and Fed- erico Fancellu. Evaluating machine translation per- formance on chinese idioms with a blacklist method.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "6830--6841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. 
In Advances in neural informa- tion processing systems, pages 6830-6841.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Automatic metaphor interpretation as a paraphrasing task", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1029--1037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova. 2010a. Automatic metaphor inter- pretation as a paraphrasing task. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1029-1037. Association for Computational Linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Models of metaphor in nlp", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "688--697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova. 2010b. Models of metaphor in nlp. In Proceedings of the 48th annual meeting of the as- sociation for computational linguistics, pages 688- 697.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Statistical metaphor processing", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "2", "pages": "301--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekaterina Shutova, Simone Teufel, and Anna Korho- nen. 2013. Statistical metaphor processing. 
Compu- tational Linguistics, 39(2):301-353.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "A corpusbased study of idioms in academic speech", "authors": [ { "first": "Rita", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "Dushyanthi", "middle": [], "last": "Mendis", "suffix": "" } ], "year": 2003, "venue": "Tesol Quarterly", "volume": "37", "issue": "3", "pages": "419--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rita Simpson and Dushyanthi Mendis. 2003. A corpus- based study of idioms in academic speech. Tesol Quarterly, 37(3):419-441.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Fixed expressions and the production of idioms", "authors": [ { "first": "A", "middle": [], "last": "Simone", "suffix": "" }, { "first": "", "middle": [], "last": "Sprenger", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone A Sprenger. 2003. Fixed expressions and the production of idioms. Ph.D. thesis, Radboud Univer- sity Nijmegen Nijmegen.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Metaphoric paraphrase generation", "authors": [ { "first": "Kevin", "middle": [], "last": "Stowe", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.12854" ] }, "num": null, "urls": [], "raw_text": "Kevin Stowe, Leonardo Ribeiro, and Iryna Gurevych. 2020. Metaphoric paraphrase generation. 
arXiv preprint arXiv:2002.12854.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "transforming\" delete, retrieve, generate approach for controlled text style transfer", "authors": [ { "first": "Akhilesh", "middle": [], "last": "Sudhakar", "suffix": "" }, { "first": "Bhargav", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Maheswaran", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3260--3270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Ma- heswaran. 2019. \"transforming\" delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3260- 3270.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in neural information processing sys- tems, pages 3104-3112.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "A computational system of metaphor generation with evaluation mechanism", "authors": [ { "first": "Asuka", "middle": [], "last": "Terai", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2010, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asuka Terai and Masanori Nakagawa. 2010. A compu- tational system of metaphor generation with evalua- tion mechanism. In International Conference on Ar- tificial Neural Networks, pages 142-147. Springer.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.05732" ] }, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2017. Paranmt-50m: Pushing the limits of paraphrastic sentence embed- dings with millions of machine translations. arXiv preprint arXiv:1711.05732.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "The functions of formulaic language: An integrated model", "authors": [ { "first": "Alison", "middle": [], "last": "Wray", "suffix": "" }, { "first": "R", "middle": [], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Perkins", "suffix": "" } ], "year": 2000, "venue": "Language & Communication", "volume": "20", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alison Wray and Michael R Perkins. 2000. The func- tions of formulaic language: An integrated model. Language & Communication, 20(1):1-28.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Sentence simplification by monolingual machine translation", "authors": [ { "first": "", "middle": [], "last": "Sander Wubben", "suffix": "" }, { "first": "Apj", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Bosch", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sander Wubben, EJ Krahmer, and APJ van den Bosch. 2012. 
Sentence simplification by monolingual ma- chine translation.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "979--988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xu- ancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cy- cled reinforcement learning approach. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 979-988.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "283--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification re- search: New data can help. 
Transactions of the Asso- ciation for Computational Linguistics, 3:283-297.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "An endto-end generative architecture for paraphrase generation", "authors": [ { "first": "Qian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3123--3133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Yang, Dinghan Shen, Yong Cheng, Wenlin Wang, Guoyin Wang, Lawrence Carin, et al. 2019. An end- to-end generative architecture for paraphrase gener- ation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3123-3133.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "How to avoid sentences spelling boring? towards a neural approach to unsupervised metaphor generation", "authors": [ { "first": "Zhiwei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "861--871", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiwei Yu and Xiaojun Wan. 2019. How to avoid sen- tences spelling boring? towards a neural approach to unsupervised metaphor generation. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 861-871.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Style example-guided text generation using generative adversarial transformers", "authors": [ { "first": "Kuo-Hao", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Shoeybi", "suffix": "" }, { "first": "Ming-Yu", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.00674" ] }, "num": null, "urls": [], "raw_text": "Kuo-Hao Zeng, Mohammad Shoeybi, and Ming-Yu Liu. 2020. Style example-guided text generation using generative adversarial transformers. 
arXiv preprint arXiv:2003.00674.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Sentence simplification with deep reinforcement learning", "authors": [ { "first": "Xingxing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.10931" ] }, "num": null, "urls": [], "raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sen- tence simplification with deep reinforcement learn- ing. arXiv preprint arXiv:1703.10931.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Integrating transformer and paraphrase rules for sentence simplification", "authors": [ { "first": "Sanqiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Daqing", "middle": [], "last": "He", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Saptono", "suffix": "" }, { "first": "Parmanto", "middle": [], "last": "Bambang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.11193" ] }, "num": null, "urls": [], "raw_text": "Sanqiang Zhao, Rui Meng, Daqing He, Saptono Andi, and Parmanto Bambang. 2018. Integrating trans- former and paraphrase rules for sentence simplifica- tion. 
arXiv preprint arXiv:1810.11193.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "From solving a problem boldly to cutting the gordian knot: Idiomatic text generation", "authors": [ { "first": "Jianing", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Srihari", "middle": [], "last": "Nanniyur", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.06541" ] }, "num": null, "urls": [], "raw_text": "Jianing Zhou, Hongyu Gong, Srihari Nanniyur, and Suma Bhat. 2021. From solving a problem boldly to cutting the gordian knot: Idiomatic text generation. arXiv preprint arXiv:2104.06541.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Gruen for evaluating linguistic quality of generated text", "authors": [ { "first": "Wanzheng", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", "volume": "", "issue": "", "pages": "94--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wanzheng Zhu and Suma Bhat. 2020. Gruen for evalu- ating linguistic quality of generated text. 
In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 94-108.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "A monolingual tree-based translation model for sentence simplification", "authors": [ { "first": "Zhemin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Delphine", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1353--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353-1361.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "State-of-the-art machine translations of \"Vote them out!\" into different languages mean the opposite.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "An example from our dataset. Idioms are highlighted in blue, and their literal paraphrases are in red.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Transformer my life starts from you and anything at you , so you are my first sight and my last . Seq2Seq with copy my life starts from you and at you you you you you you my my and . Transformer with copy My life starts from you and ends at you , so you are my first and my last .", "uris": null }, "TABREF1": { "num": null, "html": null, "content": "
Statistics  # of instances  Avg. # of words
Idioms  823  3.2
Sense  862  7.9
Idiomatic sent  5170  19.0
Literal sent  5170  18.5
", "type_str": "table", "text": "Comparison of our dataset with related datasets. Training, validation and testing size splits are provided when applicable. Data in all these datasets is a combination of collection from the wild and manual generation. In our corpus, original sentences are idiomatic sentences and target sentences are literal sentences." }, "TABREF2": { "num": null, "html": null, "content": "
% n-grams  PIE  Para-NMT  Wiki-Large  Metaphor
uni-grams  13.86  46.34  36.21  6.88
bi-grams  23.60  71.24  52.56  36.59
tri-grams  30.19  82.26  58.75  59.61
4-grams  36.51  86.46  62.79  74.41
", "type_str": "table", "text": "Statistics of our parallel corpus." }, "TABREF4": { "num": null, "html": null, "content": "", "type_str": "table", "text": "Statistics of sense distribution. An idiom has an average of 1.05 senses." }, "TABREF5": { "num": null, "html": null, "content": "
Model  BLEU (s2i)  BLEU (i2s)  SARI (s2i)  SARI (i2s)  GRUEN (s2i)  GRUEN (i2s)
Seq2Seq  25.16  42.96  24.13  33.89  32.25  33.45
Seq2Seq with copy  38.02  47.58  43.02  49.69  27.79  32.84
Transformer  45.58  46.65  36.67  38.62  44.05  44.06
Transformer with copy  59.56  57.91  39.93  45.10  59.27  52.25
Pretrained BART  79.32  78.53  62.30  61.82  77.49  78.03
Pipeline  65.56  70.03  67.64  62.45  67.27  74.16
", "type_str": "table", "text": "" }, "TABREF8": { "num": null, "html": null, "content": "
Model  BLEU  ROUGE-1  ROUGE-2  ROUGE-L  METEOR  SARI  GRUEN  Perplexity
Seq2Seq  42.96  62.43  40.46  62.54  59.36  33.89  33.45  9.54
Seq2Seq with copy  47.58  71.67  50.20  76.77  77.23  49.69  32.84  21.85
Transformer  46.65  60.90  43.34  61.39  69.82  38.62  44.06  10.59
Transformer with copy  57.91  68.44  54.97  69.59  79.17  45.10  52.25  4.61
Pretrained BART  78.53  84.64  77.21  84.95  85.36  61.82  78.03  5.35
Pipeline  70.03  78.50  68.39  78.90  83.65  62.45  74.16  4.25
", "type_str": "table", "text": "Performance comparison of baselines for idiomatic sentence generation" }, "TABREF9": { "num": null, "html": null, "content": "
Attribute  high non-compositionality
", "type_str": "table", "text": "Performance comparison of baselines for idiomatic sentence paraphrasing" } } } }