{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:28:50.833190Z" }, "title": "Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "caoyixin2011@gmail.com" }, { "first": "Ruihao", "middle": [], "last": "Shui", "suffix": "", "affiliation": {}, "email": "ruihaoshui@u.nus.edu" }, { "first": "Liangming", "middle": [], "last": "Pan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The curse of knowledge can impede communication between experts and laymen. We propose a new task of expertise style transfer and contribute a manually annotated dataset with the goal of alleviating such cognitive biases. Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen descriptions using simple words. This is a challenging task, unaddressed in previous work, as it requires the models to have expert intelligence in order to modify text with a deep understanding of domain knowledge and structures. We establish the benchmark performance of five stateof-the-art models for style transfer and text simplification. The results demonstrate a significant gap between machine and human performance. We also discuss the challenges of automatic evaluation, to provide insights into future research directions. The dataset is publicly available at https://srhthu.github. io/expertise-style-transfer/.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The curse of knowledge can impede communication between experts and laymen. We propose a new task of expertise style transfer and contribute a manually annotated dataset with the goal of alleviating such cognitive biases. Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen descriptions using simple words. This is a challenging task, unaddressed in previous work, as it requires the models to have expert intelligence in order to modify text with a deep understanding of domain knowledge and structures. We establish the benchmark performance of five stateof-the-art models for style transfer and text simplification. The results demonstrate a significant gap between machine and human performance. We also discuss the challenges of automatic evaluation, to provide insights into future research directions. The dataset is publicly available at https://srhthu.github. 
io/expertise-style-transfer/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The curse of knowledge (Camerer et al., 1989 ) is a pervasive cognitive bias exhibited across all domains, leading to discrepancies between an expert's advice and a layman's understanding of it (Tan and Goonawardene, 2017). Take medical consultations as an example: patients often find it difficult to understand their doctors' language. On the other hand, it is important for doctors to accurately disclose the exact illness conditions based on patients' simple vocabulary. Misunderstanding may lead to failures in diagnosis and prompt treatment, or even death. How to automatically adjust the expertise level of texts is critical for effective communication.", "cite_spans": [ { "start": 23, "end": 44, "text": "(Camerer et al., 1989", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a new task of text style transfer between expert language and layman language, namely Expertise Style Transfer, and contribute a manually annotated dataset in the medical Many cause dyspnea, pleuritic chest pain, or both.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most common symptoms, regardless of the type of fluid in the pleural space or its cause, are shortness of breath and chest pain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "About 1/1000 hypertensive patients has a pheochromocytoma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The incidence of Pheochromocytomas may be quite small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The lesion slowly enlarges, often ulcerates, and spread to other skin areas. Lesions heal slowly, with scarring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The sores slowly enlarge and spread to nearby tissue, causing further damage. Sores heal slowly and may result in permanent scarring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In patients with papilledema, vision is usually not affected initially, but seconds-long graying out of vision, flickering, or blurred or double vision may occur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At first, papilledema may be present without affecting vision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fleeting vision changes (blurred vision, double vision, flickering, or complete loss of vision) typically lasting seconds are characteristic of papilledema. domain for this task. We show four examples in Figure 1 , where the upper sentence is for professionals and the lower one is for laymen. On one hand, expertise style transfer aims at improving the readability of a text by reducing the expertise level, such as explaining the complex terminology dyspnea in the first example with a simple phrase shortness of breath. On the other hand, it also aims to improve the expertise level based on context, so that laymen's expressions can be more accurate and professional. 
For example, in the second pair, causing further damage is not as accurate as ulcerates, omitting the important mucous and disintegrative conditions of the sores.", "cite_spans": [], "ref_spans": [ { "start": 204, "end": 212, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two related tasks, but neither serves as suitable prior art. The first is text style transfer (ST), which generates texts with different attributes but the same content. However, although existing approaches have achieved great success regarding the attributes of sentiment and formality (Rao and Tetreault, 2018) among others, expertise \"styling\" has not been explored yet. Another similar task is Text Simplification (TS), which rewrites a complex sentence with simple structures (Sulem et al., 2018b) while constrained by limited vocabulary (Paetzold and Specia, 2016). This task can be regarded as similar to our subtask: reducing the expertise level from expert to layman language, without considering the opposite direction. However, most existing TS datasets are derived from Wikipedia, and contain considerable noise (misaligned instances) and inadequacies (instances having non-simplified targets) (Xu et al., 2015; Surya et al., 2019); a more detailed discussion can be found in Section 3.2.", "cite_spans": [ { "start": 304, "end": 329, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF38" }, { "start": 499, "end": 520, "text": "(Sulem et al., 2018b)", "ref_id": "BIBREF45" }, { "start": 561, "end": 588, "text": "(Paetzold and Specia, 2016)", "ref_id": "BIBREF33" }, { "start": 920, "end": 937, "text": "(Xu et al., 2015;", "ref_id": "BIBREF52" }, { "start": 938, "end": 957, "text": "Surya et al., 2019)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we construct a manually annotated dataset for expertise style transfer in the medical domain, named MSD, and conduct deep analysis by implementing state-of-the-art (SOTA) TS and ST models. The dataset is derived from human-written medical references, The Merck Manuals 1 , which include two parallel versions of texts, one tailored for consumers and the other for healthcare professionals. For automatic evaluation, we hire doctors to annotate the parallel sentences between the two versions (examples shown in Figure 1 ). Compared with both ST and TS datasets, MSD is more challenging from two aspects:", "cite_spans": [], "ref_spans": [ { "start": 522, "end": 530, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Knowledge Gap. Domain knowledge is the key factor that influences the expertise level of text, which is also a key difference from conventional styles. We identify two major types of knowledge gaps in MSD: terminology, e.g., dyspnea in the first example; and empirical evidence. As shown in the third pair, doctors prefer to use statistics (About 1/1000), while laymen do not (quite small).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexical & Structural Modification. Fu et al. (2019) have indicated that most ST models only perform lexical modification, while leaving structures unchanged. In fact, syntactic structures play a significant role in language styles, especially regarding complexity or simplicity (Carroll et al., 1999). 
As shown in the last example, a complex sentence can be expressed with several simple sentences by appropriately splitting its content. However, available datasets rarely contain such cases.", "cite_spans": [ { "start": 45, "end": 51, "text": "(2019)", "ref_id": "BIBREF29" }, { "start": 278, "end": 300, "text": "(Carroll et al., 1999)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions can be summarized as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose the new task of expertise style transfer, which aims to facilitate communication between experts and laymen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We contribute a challenging dataset that requires knowledge-aware and structural modification techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We establish benchmark performance and discuss key challenges of datasets, models and evaluation metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing ST work has achieved promising results on the styles of sentiment (Hu et al., 2017; Shen et al., 2017) , formality (Rao and Tetreault, 2018) , offensiveness (dos Santos et al., 2018) , politeness (Sennrich et al., 2016) , authorship (Xu et al., 2012) , and gender and age (Prabhumoye et al., 2018; Lample et al., 2019) , etc. Nevertheless, only a few of them focus on supervised methods, due to the limited availability of parallel corpora. Jhamtani et al. (2017) extract modern-language versions of Shakespeare's plays from an educational site, while Rao and Tetreault (2018) utilize crowdsourcing techniques to rewrite sentences from Yahoo Answers, Yelp and Amazon reviews, which are then used for training neural machine translation (NMT) models and for evaluation. More practically, there is enthusiasm for unsupervised methods that require no parallel data. These fall into three groups. The first group is Disentanglement methods, which learn disentangled representations of style and content, and then directly manipulate these latent representations to control style-specific text generation. Shen et al. (2017) propose a cross-aligned autoencoder that learns a shared latent content space between true samples and generated samples through an adversarial classifier. Hu et al. (2017) utilize a neural generative model, Variational Autoencoders (VAEs) (Kingma and Welling, 2013), to represent the content as continuous variables with a standard Gaussian prior, and reconstruct the style vector from the generated samples via an attribute discriminator. To improve the ability of style-specific generation, Fu et al. (2018) utilize multiple generators, which are then extended with a Wasserstein distance regularizer. SHAPED (Zhang et al., 2018a) learns one shared and several private encoder-decoder frameworks to capture both common and distinguishing features. 
Some variants further investigate auxiliary tasks to better preserve content (John et al., 2019), or domain adaptation.", "cite_spans": [ { "start": 75, "end": 92, "text": "(Hu et al., 2017;", "ref_id": "BIBREF18" }, { "start": 93, "end": 111, "text": "Shen et al., 2017)", "ref_id": "BIBREF42" }, { "start": 124, "end": 149, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF38" }, { "start": 166, "end": 191, "text": "(dos Santos et al., 2018)", "ref_id": "BIBREF39" }, { "start": 205, "end": 228, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF40" }, { "start": 242, "end": 259, "text": "(Xu et al., 2012)", "ref_id": "BIBREF54" }, { "start": 278, "end": 303, "text": "(Prabhumoye et al., 2018;", "ref_id": "BIBREF37" }, { "start": 304, "end": 324, "text": "Lample et al., 2019)", "ref_id": null }, { "start": 551, "end": 575, "text": "Rao and Tetreault (2018)", "ref_id": "BIBREF38" }, { "start": 1093, "end": 1111, "text": "Shen et al. (2017)", "ref_id": "BIBREF42" }, { "start": 1268, "end": 1284, "text": "Hu et al. (2017)", "ref_id": "BIBREF18" }, { "start": 1598, "end": 1614, "text": "Fu et al. (2018)", "ref_id": "BIBREF15" }, { "start": 1715, "end": 1735, "text": "(Zhang et al., 2018a", "ref_id": "BIBREF56" }, { "start": 1935, "end": 1954, "text": "(John et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Text Style Transfer", "sec_num": "2.1" }, { "text": "Another line of work argues that it is difficult to disentangle style from content. Thus, the main idea is to learn style-specific translations, which are trained using unaligned data based on back-translation (Prabhumoye et al., 2018; Lample et al., 2019), pseudo-parallel sentences mined according to semantic similarity (Jin et al., 2019), or cyclic reconstruction (Dai et al., 2019); we mark these as Translation methods.", "cite_spans": [ { "start": 211, "end": 235, "text": "Prabhumoye et al., 2018;", "ref_id": "BIBREF37" }, { "start": 236, "end": 256, "text": "Lample et al., 2019)", "ref_id": null }, { "start": 318, "end": 336, "text": "(Jin et al., 2019)", "ref_id": "BIBREF20" }, { "start": 364, "end": 382, "text": "(Dai et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Text Style Transfer", "sec_num": "2.1" }, { "text": "The third group is Manipulation methods, which first identify the style words by their statistics, and then replace them with similar retrieved sentences of the target style. Xu et al. (2018) jointly train the two steps with a neutralization module and a stylization module based on reinforcement learning. For better stylization, Zhang et al. (2018b) introduce a learned sentiment memory network, while John et al. (2019) utilize hierarchical reinforcement learning.", "cite_spans": [ { "start": 165, "end": 181, "text": "Xu et al. (2018)", "ref_id": "BIBREF51" }, { "start": 321, "end": 341, "text": "Zhang et al. (2018b)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Text Style Transfer", "sec_num": "2.1" }, { "text": "Earlier work on text simplification defines a sentence as simple if it has more frequent words, shorter length, fewer syllables per word, etc. This motivates a variety of syntactic rule-based methods, such as reducing sentence length (Chandrasekar and Srinivas, 1997; Vickrey and Koller, 2008), lexical substitution (Glavas and Stajner, 2015; Paetzold and Specia, 2016) or sentence splitting (Woodsend and Lapata, 2011; Sulem et al., 2018b). 
Another line of work follows the success of machine translation (MT) (Klein et al., 2017), and regards TS as a monolingual translation from complex language to simple language (Zhu et al., 2010; Coster and Kauchak, 2011; Wubben et al., 2012). Zhang and Lapata (2017) incorporate reinforcement learning into the encoder-decoder framework to encourage three types of simplification rewards concerning language simplicity, relevance and fluency, while Shardlow and Nawaz (2019) improve the performance of MT models by introducing explanatory synonyms. To alleviate the heavy burden of parallel training corpora, Surya et al. (2019) propose an unsupervised model via adversarial learning between a shared encoder and separate decoders.", "cite_spans": [ { "start": 516, "end": 536, "text": "(Klein et al., 2017)", "ref_id": "BIBREF26" }, { "start": 624, "end": 642, "text": "(Zhu et al., 2010;", "ref_id": "BIBREF60" }, { "start": 643, "end": 668, "text": "Coster and Kauchak, 2011;", "ref_id": "BIBREF10" }, { "start": 669, "end": 689, "text": "Wubben et al., 2012)", "ref_id": "BIBREF50" }, { "start": 692, "end": 715, "text": "Zhang and Lapata (2017)", "ref_id": "BIBREF55" }, { "start": 898, "end": 923, "text": "Shardlow and Nawaz (2019)", "ref_id": "BIBREF41" }, { "start": 1071, "end": 1077, "text": "(2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Text Simplification", "sec_num": "2.2" }, { "text": "The simplicity of language in the medical domain is particularly important. Terminologies are one of the main obstacles to understanding, and extracting their explanations could be helpful for TS (Shardlow and Nawaz, 2019). Del\u00e9ger and Zweigenbaum (2008) detect paraphrases from comparable medical corpora of specialized and lay texts, and Kloehn et al. (2018) explore UMLS (Bodenreider, 2004) and WordNet (Miller, 2009) with word embedding techniques. Furthermore, Van den Bercken et al. (2019) directly align sentences from medical terminological articles in Wikipedia and Simple Wikipedia 2 , which confines the editors' vocabulary to only 850 basic English words. They then have experts refine these aligned sentences for automatic evaluation. However, the Wikipedia-based dataset is still noisy (with misaligned instances) and inadequate (with instances having non-simplified targets) with respect to both model training and testing. Besides, it is usually ignored that the opposite direction of TS, improving the expertise level of layman language for accuracy and professionalism, is also critical for better communication.", "cite_spans": [ { "start": 196, "end": 222, "text": "(Shardlow and Nawaz, 2019)", "ref_id": "BIBREF41" }, { "start": 225, "end": 255, "text": "Del\u00e9ger and Zweigenbaum (2008)", "ref_id": "BIBREF12" }, { "start": 341, "end": 361, "text": "Kloehn et al. 
(2018)", "ref_id": "BIBREF27" }, { "start": 375, "end": 394, "text": "(Bodenreider, 2004)", "ref_id": "BIBREF1" }, { "start": 407, "end": 421, "text": "(Miller, 2009)", "ref_id": null }, { "start": 490, "end": 496, "text": "(2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Text Simplification", "sec_num": "2.2" }, { "text": "To sum up, both tasks lack parallel data for training and evaluation. This prevents researchers from exploring more advanced models concerning the knowledge gap as well as linguistic modification of lexicons and structures. In this work, we define a more useful and challenging task of expertise style transfer with high-quality parallel sentences for evaluation. Besides, the two communities of ST and TS can shed lights to each other on sentence modification techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.3" }, { "text": "We describe our dataset construction that comprises three steps: data preprocessing, expert annotation and knowledge incorporation. We then give a detailed analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Design", "sec_num": "3" }, { "text": "The Merck Manuals, also known as the MSD Manuals, have been the world's most trusted health reference for over 100 years. It covers a wide range of medical topics, and is written through a collaboration between hundreds of medical experts, supervised by independent editors. For each topic, it includes two versions: one tailored for consumers and the other for professionals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3.1" }, { "text": "Step 1: Data Preprocessing. Although the two versions of documents refer to the same topic, they are not aligned, as each document is written independently. We first collect the raw texts from the MSD website 3 , and obtain 2601 professional and 2487 consumer documents with 1185 internal links among them. We then split each document into sentences, with the resultant distribution of medical topics as shown in Figure 2 . Finally, to alleviate the annotation burden, we find possible parallel groups of sentences by matching their document titles and subsection titles, which denote medical PCIO elements, such as the Diagnosis and Symptoms. Specifically, we first disambiguate the internal links by matching the document title and its accompanied ICD-9 code. Then, we manually align medical PCIO elements in the two versions to provide fine-grained internal links. For example, all sentences for Atherosclerosis.Symptoms in the professional MSD may be aligned with those for Atherosclerosis.Signs in the consumer MSD. We thus obtain 2551 linked sentence groups as candidates for experts to annotate. Each group contains 10.40 and 11.33 sentences on average for the professional and consumer versions, respectively. We then randomly sample 1000 linked groups for expert annotations in the next section 4 .", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 421, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3.1" }, { "text": "Step 2: Expert Annotation. Given the aligned groups of sentences in professional and consumer MSD, we develop an annotation platform to facilitate expert annotations. We hire three doctors to select sentences from each version of group to annotate pairs of sentences that have the same meaning but are written in different styles. 
The hired doctors are formally medically trained, and are qualified to understand the semantics of the medical texts. To avoid subjective judgments in the annotations, they are not allowed to change the content. In particular, the doctors are Chinese and know English as a second language. Thus, we provide the English content accompanied by a Chinese translation as assistance, which helps to increase the annotation speed while ensuring quality. We also conduct verification on each pair of parallel sentences with the help of another doctor. Note that each pairing may contain multiple professional and consumer sentences; i.e., multiple alignment is possible, and the alignments are not necessarily one-to-one. This strict procedure also discards many aligned groups, leading to 675 annotations for testing, with the distribution of medical PCIO elements as shown in Figure 3 . Step 3: Knowledge Incorporation. To facilitate knowledge-aware analysis, we utilize information extraction techniques (Cao et al., 2018a, 2019) to identify medical concepts in each sentence. Here, we use QuickUMLS (Soldaini and Goharian, 2016) to automatically link entity mentions to the Unified Medical Language System (UMLS) (Bodenreider, 2004); a minimal sketch of this linking step is given below. Note that each mention may refer to multiple concepts, in which case we align it to the highest-ranked one. [Table 2: Statistics of MSD and SimpWiki. One annotation may contain multiple sentences, and MSD Train has no parallel annotations due to the expensive expert cost. The ratio of layman to expert according to each metric denotes the gap between the two styles, and a higher value implies smaller differences, except for #Sentence.]", "cite_spans": [ { "start": 1331, "end": 1349, "text": "(Cao et al., 2018a", "ref_id": "BIBREF3" }, { "start": 1350, "end": 1369, "text": "(Cao et al., , 2019", "ref_id": "BIBREF5" }, { "start": 1440, "end": 1469, "text": "(Soldaini and Goharian, 2016)", "ref_id": "BIBREF43" }, { "start": 1550, "end": 1569, "text": "(Bodenreider, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1198, "end": 1206, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1690, "end": 1697, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3.1" }, { "text": "For example, the mention dyspnea is linked to concept C0013404. Through this three-step process, we obtain a large set of (non-parallel) training sentences in each style, and a small set of parallel sentences for evaluation. The detailed statistics, as compared with other datasets, can be found in Table 2 and Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 299, "text": "Table 2 and Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3.1" }, { "text": "Let us compare our MSD dataset against both publicly available ST and TS datasets. SimpWiki (Van den Bercken et al., 2019) is a TS dataset derived from the linked articles between Simple Wikipedia and Normal Wikipedia. It focuses on the medical domain and extracts parallel sentences automatically by computing their BLEU scores. GYAFC (Rao and Tetreault, 2018) is the largest ST dataset on formality, covering the domains of Entertainment & Music (E&M) and Family & Relationships (F&R) from Yahoo Answers. It contains more than 50,000 training sentences (non-parallel) for each domain, and over 1,000 parallel sentences for testing, obtained by rewriting informal answers via Amazon Mechanical Turk. 
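As referenced in Step 3 above, the concept linking can be sketched with the QuickUMLS package. This is a minimal sketch: the index path and similarity threshold are illustrative assumptions, and QuickUMLS requires an index built locally from a UMLS release.

```python
from quickumls import QuickUMLS

# Illustrative path and threshold; the index must be built beforehand.
matcher = QuickUMLS('/path/to/quickumls_index', threshold=0.8)

text = 'Many cause dyspnea, pleuritic chest pain, or both.'
for candidates in matcher.match(text, best_match=True):
    # Each mention may match several UMLS concepts; keep the highest-ranked one.
    best = max(candidates, key=lambda c: c['similarity'])
    print(best['ngram'], '->', best['cui'])  # e.g., dyspnea -> C0013404
```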
Yelp and Amazon are sentiment ST datasets built by rewriting reviews via crowdsourcing. They both contain over 270k training sentences (non-parallel) and 500 parallel sentences for evaluation. Authorship (Xu et al., 2012) aims at transferring styles between modern English and Shakespearean English. It contains 18,395 sentences for training (non-parallel) and 1,462 sentence pairs for testing. Table 2 presents the statistics of expertise and layman sentences in our dataset as well as SimpWiki. We split the sentences using NLTK, and compute the ratio of layman to expert for each metric to denote the gap between the two styles (a higher value implies a smaller gap, except for #Sentence). Three standard readability indices are used to evaluate the simplicity levels: Flesch-Kincaid (Kincaid et al., 1975) , Gunning (Gunning, 1968) and Coleman (Coleman and Liau, 1975) . The lower the indices are, the simpler the sentence is. Note that SimpWiki does not provide a train/test split, and thus we randomly sample 350 sentence pairs for evaluation. We follow the same strategy in our experiments.", "cite_spans": [ { "start": 337, "end": 362, "text": "(Rao and Tetreault, 2018)", "ref_id": "BIBREF38" }, { "start": 898, "end": 915, "text": "(Xu et al., 2012)", "ref_id": "BIBREF54" }, { "start": 1482, "end": 1504, "text": "(Kincaid et al., 1975)", "ref_id": "BIBREF23" }, { "start": 1515, "end": 1529, "text": "(Gunning, 1968", "ref_id": "BIBREF17" }, { "start": 1544, "end": 1568, "text": "(Coleman and Liau, 1975)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1089, "end": 1096, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset Analysis", "sec_num": "3.2" }, { "text": "Compared with SimpWiki, we can see that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Statistics", "sec_num": null }, { "text": "(1) MSD evaluates structural modifications. As layman language usually requires more simple sentences to express the same meaning as the expert language, each expert sentence in MSD Test refers to 1.13 layman sentences on average, while the number in SimpWiki is only 0.99. (2) MSD is more distinct between the two styles, which is critical for style transfer. This is markedly demonstrated by the larger difference between their (concept) vocabulary sizes (0.62/0.81 vs. 0.85 in the ratio of layman to expert), and between the readability indices (0.81/0.81 vs. 0.84 on average). (3) We have more complex professional sentences in the expert language (14.57/14.07 vs. 13.55 in the three readability indices on average) but comparatively simple sentences in the laymen language (11.89/11.45 vs. 11.40). This is intuitive because both versions of Wikipedia are written by crowdsourcing editors, while MSD is written by experts in the medical domain.", "cite_spans": [ { "start": 777, "end": 799, "text": "(11.89/11.45 vs. 11.40", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Statistics", "sec_num": null }, { "text": "One of the main concerns in ST is the limited availability of parallel sentences for automatic evaluation. On one hand, assuming that the parallel sentences have the same meaning, many datasets find the aligned sentences to have higher string overlap (as measured by BLEU). On the other hand, the two sentences should have different styles, and may vary a lot in expression, thus leading to a lower BLEU. Hence, how to build a testing dataset that considers both criteria is critical. We analyze the quality of the testing sentence pairs in each dataset. 
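Concretely, this pair-quality analysis can be computed with 4-gram BLEU and a token-level edit distance. The following is a minimal sketch, where the tokenization and smoothing choices are illustrative assumptions:

```python
import nltk  # assumes the 'punkt' tokenizer data has been downloaded
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def pair_quality(src, tgt):
    """Return (4-gram BLEU, token-level edit distance) for one sentence pair."""
    s, t = nltk.word_tokenize(src.lower()), nltk.word_tokenize(tgt.lower())
    bleu = sentence_bleu([s], t, smoothing_function=SmoothingFunction().method1)
    # Standard dynamic-programming Levenshtein distance over tokens.
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        for j in range(len(t) + 1):
            if i == 0 or j == 0:
                d[i][j] = i + j
            else:
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return bleu, d[len(s)][len(t)]

print(pair_quality("Many cause dyspnea, pleuritic chest pain, or both.",
                   "The most common symptoms are shortness of breath and chest pain."))
```

A low BLEU together with a high edit distance indicates a pair whose two sides differ in both wording and structure.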
Both metrics follow Fu et al. (2019). Higher BLEU implies more similar sentences, while higher edit distance implies more heterogeneous structures. Table 3 presents the BLEU and edit distance (ED for short) scores. Note that each pair of parallel sentences is verified to convey the same meaning during annotation. We see that: (1) MSD has the lowest BLEU and the highest ED. This implies that MSD is very challenging, requiring both lexical and structural modifications. (2) TS datasets reflect more structural differences (with higher ED values) than ST datasets. This means that TS datasets, which concern the nature of language complexity (or simplicity), are more complex to transfer.", "cite_spans": [ { "start": 550, "end": 567, "text": "(Fu et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 682, "end": 689, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Quality of Parallel Sentences", "sec_num": null }, { "text": "We reimplement five SOTA models from prior TS and ST studies on both the MSD and SimpWiki datasets. A further ablation study gives a detailed analysis of the knowledge and structure impacts, and highlights the challenges of existing metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We choose the following methods to establish benchmark performance for expertise style transfer on the two datasets, because they: (1) achieve SOTA performance in their fields; (2) are typical methods (as grouped in Section 2); and (3) release code for reimplementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "The TS models 5 selected are: (1) the supervised model OpenNMT+PT, which incorporates a phrase table into OpenNMT (Klein et al., 2017) to provide guidance for replacing complex words with their simple synonyms (Shardlow and Nawaz, 2019) ; and (2) the unsupervised model UNTS, which utilizes adversarial learning (Surya et al., 2019) .", "cite_spans": [ { "start": 108, "end": 128, "text": "(Klein et al., 2017)", "ref_id": "BIBREF26" }, { "start": 209, "end": 235, "text": "(Shardlow and Nawaz, 2019)", "ref_id": "BIBREF41" }, { "start": 305, "end": 325, "text": "(Surya et al., 2019)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "The ST models selected are: (1) the disentanglement method ControlledGen (Hu et al., 2017 ), which utilizes VAEs to learn content representations following a Gaussian prior, and reconstructs a style vector via a discriminator; (2) the manipulation method DeleteAndRetrieve, which first identifies style words with a statistical method, and then replaces them with target-style words derived from a given corpus; and (3) the translation method StyleTransformer (Dai et al., 2019) , which uses cyclic reconstruction to learn content and style vectors without parallel data.", "cite_spans": [ { "start": 78, "end": 94, "text": "(Hu et al., 2017", "ref_id": "BIBREF18" }, { "start": 447, "end": 465, "text": "(Dai et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "We use the pre-trained OpenNMT+PT model released by the authors 6 . The other models are trained using the MSD and SimpWiki training data. We leave 20% of the training data for validation. 
The training settings follow standard best practice: all models are trained using Adam (Kingma and Ba, 2015) with a mini-batch size of 32, and the hyperparameters are tuned on the validation set. We set the shared parameters to be the same for all baseline models: the maximum sequence length is 100, the word embeddings are initialized with 300-dimensional GloVe (Pennington et al., 2014) , the learning rate is set to 0.001, and adaptive learning rate decay is applied. We adopt early stopping, and the dropout rate is set to 0.5 for both the encoder and the decoder.", "cite_spans": [ { "start": 540, "end": 565, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.2" }, { "text": "Following Dai et al. (2019) , we conduct automatic evaluation on three aspects:", "cite_spans": [ { "start": 10, "end": 27, "text": "Dai et al. (2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Style Accuracy (marked as Acc) measures how accurately the model controls sentence style. We train two classifiers on the training set of each dataset using fastText (Joulin et al., 2017) .", "cite_spans": [ { "start": 171, "end": 192, "text": "(Joulin et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Fluency (marked as PPL) is usually measured by the perplexity of the transferred sentence. We fine-tune the state-of-the-art pretrained language model, BERT (Devlin et al., 2019) , on the training set of each dataset for each style.", "cite_spans": [ { "start": 157, "end": 178, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Content Similarity measures how much content is preserved during style transfer. We calculate 4-gram BLEU (Papineni et al., 2002) between model outputs and inputs (marked as self-BLEU), and between outputs and gold human references (marked as ref-BLEU).", "cite_spans": [ { "start": 106, "end": 129, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "Automatic metrics for content similarity are arguably unreliable, since the original inputs usually achieve the highest scores (Fu et al., 2019) . We thus also conduct human evaluation. [Table 4: Overall performance based on style transfer evaluation metrics from expertise to laymen language (marked as E2L) and in the opposite direction (L2E). Gold denotes human references.]", "cite_spans": [ { "start": 127, "end": 144, "text": "(Fu et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "To evaluate over the entire test set, only layman annotators are involved, but we ensure that the layman-style sentences are provided as references to assist understanding. Each annotator is asked to rate the model output given both the input and the gold references. The rating ranges from 1 to 5, where higher values indicate that more semantic content is preserved. Text Simplification Measurement. The above metrics may not perform well regarding language simplicity (Sulem et al., 2018a) . So, we also utilize a TS evaluation metric: SARI (Xu et al., 2016) , which compares the n-grams of the output against those of the input and the human references, and considers the words that are correctly added, deleted and kept by the system. 
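To illustrate the intuition behind SARI, here is a toy, unigram-only sketch; it is not the official implementation, which scores n-grams up to length 4 and uses precision only for the deletion term:

```python
def sari_unigram(source, output, references):
    """Toy, unigram-only illustration of SARI's add/keep/delete rewards."""
    src, out = set(source.split()), set(output.split())
    in_any_ref = set().union(*[set(r.split()) for r in references])

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    # Addition: new words in the output are good if some reference contains them.
    added, ref_added = out - src, in_any_ref - src
    add_f1 = f1(len(added & ref_added) / len(added) if added else 0.0,
                len(added & ref_added) / len(ref_added) if ref_added else 0.0)

    # Keeping: words kept from the source are good if the references keep them too.
    kept, ref_kept = src & out, src & in_any_ref
    keep_f1 = f1(len(kept & ref_kept) / len(kept) if kept else 0.0,
                 len(kept & ref_kept) / len(ref_kept) if ref_kept else 0.0)

    # Deletion: dropped source words are good if the references drop them as well.
    deleted, ref_deleted = src - out, src - in_any_ref
    del_p = len(deleted & ref_deleted) / len(deleted) if deleted else 0.0

    return (add_f1 + keep_f1 + del_p) / 3

print(sari_unigram("the lesion slowly enlarges and ulcerates",
                   "the sore slowly enlarges and spreads",
                   ["the sore slowly gets bigger and spreads"]))
```

Unlike BLEU against the references alone, this rewards deleting complex words and adding simple ones, which is why SARI penalizes systems that merely copy the input.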
Table 4 presents the overall performance. Since each pair of parallel sentences has been verified during annotation, we did not report human scores for the gold references to avoid repeated evaluations. We can see that:", "cite_spans": [ { "start": 502, "end": 523, "text": "(Sulem et al., 2018a)", "ref_id": "BIBREF44" }, { "start": 576, "end": 593, "text": "(Xu et al., 2016)", "ref_id": "BIBREF53" } ], "ref_spans": [ { "start": 747, "end": 754, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.3" }, { "text": "(1) Parallel sentences in MSD have higher quality than those in SimpWiki, because our gold references are more fluent (4.29 vs. 7.65 in perplexity on average) and more discriminable (91% vs. 60% in average style accuracy). (2) The transfer for L2E is more difficult (except in content similarity) than that for E2L: 39.55% vs. 42.50% in Acc on average, 11.50 vs. 10.33 in PPL on average, and 2.80 vs. 2.63 in human ratings on average. This is because increasing the expertise level requires more context and knowledge, and is harder than simplification. (3) TS models perform similarly to ST models. Besides, the supervised model OpenNMT+PT outperforms the unsupervised UNTS in fluency and content similarity due to the additional supervision signals. On the other hand, UNTS achieves higher Acc since it utilizes more non-parallel training data. (4) Style accuracy is inversely related to content similarity, making it challenging to propose a comprehensive evaluation metric that can balance the two opposing directions. In terms of content similarity, even though both self-BLEU and ref-BLEU show a strong correlation with human ratings (over 0.98 Pearson coefficient with p-value < 0.0001), the higher scores of ControlledGen do not demonstrate superior performance, as it actually makes few modifications to style. Instead, DeleteAndRetrieve presents a strong ability to control style (70% on average in Acc on MSD), but hardly preserves the content. StyleTransformer performs more stably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Performance", "sec_num": "4.4" }, { "text": "Next, we discuss the key factors of MSD. We take E2L as the exemplar for discussion, as we have observed similar results in the opposite direction. Figure 4a shows the performance curves of BLEU and style accuracy. We choose the concept ranges so that they contain similar numbers of sentences. As the number of concepts increases, we can see a downward BLEU trend. This is because it becomes more difficult to preserve content when the sentence is more professional. As for style accuracy, DeleteAndRetrieve achieves its peak around [8, 12) concepts, while the performance of the other models drops gradually. Clearly, a smaller number of concepts benefits the model in better understanding the sentences due to their correlated semantics, while a larger number of concepts requires knowledge-aware text understanding. Figure 4b presents the performance curves regarding the structural differences, where the edit distance is computed as mentioned in Section 3.2. A higher score denotes more heterogeneous structures. We see a trend similar to the concept curves. That is, existing models perform well in simple cases (fewer concepts and smaller structural differences), but become worse when the language is complex. 
We doubt that the encoder in each model can understand the domain-specific language sufficiently well without considering knowledge. We thus propose a simple variant of ControlledGen that introduces terminology definitions, and observe some interesting findings in Section 4.10.", "cite_spans": [ { "start": 544, "end": 547, "text": "[8,", "ref_id": null }, { "start": 548, "end": 551, "text": "12)", "ref_id": null } ], "ref_spans": [ { "start": 149, "end": 158, "text": "Figure 4a", "ref_id": "FIGREF4" }, { "start": 821, "end": 830, "text": "Figure 4b", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Overall Performance", "sec_num": "4.4" }, { "text": "The styles of medical PCIO elements (e.g., symptoms) differ slightly. We separately evaluate each model on them and present the results in Figure 4c . Style accuracy remains similar across these medical PCIO elements, but there are significant differences among the models in their performance at preserving content. Specifically, models perform well on sentences about treatment, but poorly on those about evaluation, because this type of sentence usually involves many rare terms, which makes understanding challenging. [Table 5: Performance using SARI.] Table 5 presents the performance based on the TS evaluation metric, SARI. We utilize the Python package 7 and follow the settings in the original paper. Surprisingly, SARI on MSD presents a relatively comprehensive evaluation that is consistent with the above analysis as well as our intuition. ControlledGen and OpenNMT+PT are ranked lower, since they tend to simply repeat the input. DeleteAndRetrieve and UNTS are ranked in the middle due to accurate style transfer but poor content preservation. StyleTransformer is ranked highest, as it performs stably in Table 4 and Figure 4a, 4b, 4c . This inspires us to further investigate automatic evaluation metrics based on TS studies, which is our ongoing work. Even so, we still recommend human evaluation at the current stage. Table 6 presents two examples of transferred sentences. In the first example, both OpenNMT+PT and UNTS make lexical changes, replacing progresses with goes. DeleteAndRetrieve transfers the style successfully but also changes the content slightly. The other two output the original expert sentence, which is why they achieve higher BLEU (and better PPL) but fail in Acc. The manipulation method (i.e., DeleteAndRetrieve) is more aggressive in changing the style, while the disentanglement method, ControlledGen, tends to leave the input unchanged.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 146, "text": "Figure 4c", "ref_id": "FIGREF4" }, { "start": 512, "end": 519, "text": "Table 5", "ref_id": null }, { "start": 546, "end": 553, "text": "Table 5", "ref_id": null }, { "start": 1109, "end": 1116, "text": "Table 4", "ref_id": null }, { "start": 1121, "end": 1138, "text": "Figure 4a, 4b, 4c", "ref_id": "FIGREF4" }, { "start": 1335, "end": 1342, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Performance on Medical PCIO", "sec_num": "4.7" }, { "text": "The second example shows structural modifications. 
We can see that the supervised OpenNMT+PT simply deletes the complex terminology recurrent spontaneous pneumothorax, but the output sentence can be deemed correct. ControlledGen still outputs the original input sentence, and the other three fail by either simply cutting the long sentence off or changing the complex words randomly. Besides, all of the above models still perform much worse than humans, which motivates research into better models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "4.9" }, { "text": "[Table 6: Example transferred sentences (expertise to laymen). Example 1. Expertise input: 'Prostate cancer usually progresses slowly and rarely causes symptoms until advanced.' OpenNMT+PT: 'Prostate cancer usually goes slowly and rarely causes symptoms until advanced.' UNTS: 'Prostate cancer usually goes slowly and rarely causes symptoms until advanced.' ControlledGen: 'Prostate cancer usually progresses slowly and rarely causes symptoms until advanced.' DeleteAndRetrieve: 'prostate cancer usually begins to develop until symptoms appear.' StyleTransformer: 'Prostate cancer usually progresses slowly and rarely causes symptoms until advanced.' Laymen Gold: 'Prostate cancer usually causes no symptoms until it reaches an advanced stage.' Example 2. Expertise input: 'Cystic lung disease and recurrent spontaneous pneumothorax may occur. These disorders can cause pain and shortness of breath.' OpenNMT+PT: 'Cystic lung disease can cause pain and shortness of breath.' UNTS: 'lung lung disease and roughly something pneumothorax may occur.' ControlledGen: 'Cystic lung disease and recurrent spontaneous pneumothorax may occur. These disorders can cause pain and shortness of breath.' DeleteAndRetrieve: 'ear skin disease in the lungs and the lungs may occur in other disorders and may cause chest pain and shortness of breath.' StyleTransformer: 'Cystic lung disease and exposed spontaneous pneumothorax may occur.' Laymen Gold: 'Air-filled sacs (cysts) may develop in the lungs. The cysts may rupture, bringing air into the space that surrounds the lungs (pneumothorax). These disorders can cause pain and shortness of breath.']", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": null }, { "text": "We have two observations, from the aspects of models and evaluation. For models, there is a huge gap between all of the above models and the human references. MSD is indeed challenging, as it requires language modifications that consider both knowledge and structures. Most of the time, these models basically output the original sentences without any modifications, or simply cut off the complex long sentence. 
Therefore, it is exciting to combine the techniques of TS, such as syntactic revisions including sentence splitting and lexical substitution, with the techniques of ST, such as style-content disentanglement or the unsupervised ideas for alleviating the lack of parallel training data. For evaluation, human checking is necessary at the current stage, even though SARI seems to offer a good start for automatic evaluation. Based on our observations, it is actually easy to fool the three ST metrics simultaneously via a trick: outputting sentences that simply prepend style-related words to the original inputs. This is demonstrated by a variant of ControlledGen. We incorporate into the generator an extra knowledge encoder, which encodes the definition of the concepts in each sentence (as mentioned in Section 3.1). Surprisingly, such a simple model achieves a very high style accuracy (over 90%) and good BLEU scores (around 20). But the model does not succeed in the style transfer task: it simply learns to add the word doctors into layman sentences while keeping the other words almost unchanged, and to add the word eg into the expertise sentences. Thus, it achieves good performance on all three ST measures, but makes few useful modifications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.10" }, { "text": "We proposed a practical task of expertise style transfer and constructed a high-quality dataset, MSD, which is challenging due to the presence of a knowledge gap and the need for structural modifications. We established benchmark performance of five SOTA models. The results show a significant gap between machine and human performance. Our further discussion analyzed the challenges of existing metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In the future, we are interested in injecting knowledge into text representation learning (Cao et al., 2017, 2018b) for deeply understanding expert language, which will also help to generate knowledge-enhanced questions (Pan et al., 2019) for laymen.", "cite_spans": [ { "start": 90, "end": 107, "text": "(Cao et al., 2017", "ref_id": "BIBREF6" }, { "start": 108, "end": 128, "text": "(Cao et al., , 2018b", "ref_id": "BIBREF4" }, { "start": 225, "end": 243, "text": "(Pan et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://en.wikipedia.org/wiki/The_Merck_Manuals", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://simple.wikipedia.org/wiki/Main_Page", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.msdmanuals.com/ 4 The testing size is consistent with other ST datasets; the rest of the groups will be annotated for a larger dataset in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We only report TS models for the expertise-to-laymen direction, since they do not claim the opposite direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/senisioi/NeuralTextSimplification/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/cocoxu/simplification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ {
"text": "This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating neural text simplification in the medical domain", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Den Bercken", "suffix": "" }, { "first": "Robert-Jan", "middle": [], "last": "Sips", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Lofi", "suffix": "" } ], "year": 2019, "venue": "WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens Van den Bercken, Robert-Jan Sips, and Christoph Lofi. 2019. Evaluating neural text sim- plification in the medical domain. In WWW.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The unified medical language system (umls): integrating biomedical terminology", "authors": [ { "first": "Olivier", "middle": [], "last": "Bodenreider", "suffix": "" } ], "year": 2004, "venue": "Nucleic acids research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Bodenreider. 2004. The unified medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The curse of knowledge in economic settings: An experimental analysis", "authors": [ { "first": "Colin", "middle": [], "last": "Camerer", "suffix": "" }, { "first": "George", "middle": [], "last": "Loewenstein", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Weber", "suffix": "" } ], "year": 1989, "venue": "Journal of political Economy", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Camerer, George Loewenstein, and Martin We- ber. 1989. The curse of knowledge in economic set- tings: An experimental analysis. Journal of political Economy.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural collective entity linking", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018a. Neural collective entity linking. 
In COLING.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Joint representation learning of cross-lingual words and entities via attentive distant supervision", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chengjiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tiansi", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018b. Joint representation learning of cross-lingual words and entities via attentive distant supervision. In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Low-resource name tagging learned with weakly labeled data", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Zikun", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Tat-Seng Chua", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, and Heng Ji. 2019. Low-resource name tagging learned with weakly labeled data. In EMNLP.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bridge text and knowledge by learning multi-prototype entity mention embedding", "authors": [ { "first": "Yixin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Simplifying text for language-impaired readers", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "Darren", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Canning", "suffix": "" }, { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "John", "middle": [], "last": "Tait", "suffix": "" } ], "year": 1999, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Sim- plifying text for language-impaired readers. 
In EACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic induction of rules for text simplification", "authors": [ { "first": "Raman", "middle": [], "last": "Chandrasekar", "suffix": "" }, { "first": "Bangalore", "middle": [], "last": "Srinivas", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. KBS.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A computer readability formula designed for machine scoring", "authors": [ { "first": "Meri", "middle": [], "last": "Coleman", "suffix": "" }, { "first": "Ta", "middle": [ "Lin" ], "last": "Liau", "suffix": "" } ], "year": 1975, "venue": "Journal of Applied Psychology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to simplify sentences using wikipedia", "authors": [ { "first": "William", "middle": [], "last": "Coster", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the workshop on monolingual text-to-text generation. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Coster and David Kauchak. 2011. Learning to simplify sentences using wikipedia. In Proceedings of the workshop on monolingual text-to-text generation. ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Style transformer: Unpaired text style transfer without disentangled latent representation", "authors": [ { "first": "Ning", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jianze", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Paraphrase acquisition from comparable medical corpora of specialized and lay texts", "authors": [ { "first": "Louise", "middle": [], "last": "Del\u00e9ger", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2008, "venue": "AMIA Annual Symposium Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louise Del\u00e9ger and Pierre Zweigenbaum. 2008. Paraphrase acquisition from comparable medical corpora of specialized and lay texts.
In AMIA Annual Symposium Proceedings.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Rethinking text attribute transfer: A lexical analysis", "authors": [ { "first": "Yao", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jiaze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao Fu, Hao Zhou, Jiaze Chen, and Lei Li. 2019. Rethinking text attribute transfer: A lexical analysis. ArXiv.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Style transfer in text: Exploration and evaluation", "authors": [ { "first": "Zhenxin", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Xiaoye", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Simplifying lexical simplification: Do we need simplified corpora", "authors": [ { "first": "Goran", "middle": [], "last": "Glavas", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glavas and Sanja Stajner. 2015. Simplifying lexical simplification: Do we need simplified corpora? In ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Technique of clear writing", "authors": [ { "first": "Robert", "middle": [], "last": "Gunning", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Gunning. 1968.
Technique of clear writing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In ICML.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Shakespearizing modern language using copy-enriched sequence-to-sequence models", "authors": [ { "first": "Harsh", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. EMNLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Unsupervised text attribute transfer via iterative matching and translation", "authors": [ { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Santus", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. Unsupervised text attribute transfer via iterative matching and translation. In EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Disentangled representation learning for non-parallel text style transfer", "authors": [ { "first": "Vineet", "middle": [], "last": "John", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. AAAI.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017.
Bag of tricks for efficient text classification. In EACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", "authors": [ { "first": "J", "middle": [ "Peter" ], "last": "Kincaid", "suffix": "" }, { "first": "Robert", "middle": [ "P" ], "last": "Fishburne", "suffix": "Jr" }, { "first": "Richard", "middle": [ "L" ], "last": "Rogers", "suffix": "" }, { "first": "Brad", "middle": [ "S" ], "last": "Chissom", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Auto-encoding variational bayes", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. ICLR.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation.
In ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving consumer understanding of medical text: Development and validation of a new subsimplify algorithm to automatically generate term explanations in English and Spanish", "authors": [ { "first": "Nicholas", "middle": [], "last": "Kloehn", "suffix": "" }, { "first": "Gondy", "middle": [], "last": "Leroy", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Sonia", "middle": [], "last": "Colina", "suffix": "" }, { "first": "Nicole", "middle": [ "P" ], "last": "Yuan", "suffix": "" }, { "first": "Debra", "middle": [], "last": "Revere", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas Kloehn, Gondy Leroy, David Kauchak, Yang Gu, Sonia Colina, Nicole P Yuan, and Debra Revere. 2018. Improving consumer understanding of medical text: Development and validation of a new subsimplify algorithm to automatically generate term explanations in English and Spanish. JMIR.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multiple-attribute text rewriting", "authors": [], "year": null, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Multiple-attribute text rewriting. In ICLR.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In NAACL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Unsupervised lexical simplification for non-native speakers", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2016. Unsupervised lexical simplification for non-native speakers. In AAAI.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Recent advances in neural question generation", "authors": [ { "first": "Liangming", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Wenqiang", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.08949" ] }, "num": null, "urls": [], "raw_text": "Liangming Pan, Wenqiang Lei, Tat-Seng Chua, and Min-Yen Kan. 2019. Recent advances in neural question generation.
arXiv preprint arXiv:1905.08949.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Style transfer through back-translation", "authors": [ { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black. 2018. Style transfer through back-translation. In ACL.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "authors": [ { "first": "Sudha", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In NAACL.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Fighting offensive language on social media with unsupervised text style transfer", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira Dos Santos", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Melnyk", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer.
In ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Controlling politeness in neural machine translation via side constraints", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In NAACL.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Neural text simplification of clinical letters with a domain specific phrase table", "authors": [ { "first": "Matthew", "middle": [], "last": "Shardlow", "suffix": "" }, { "first": "Raheel", "middle": [], "last": "Nawaz", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Shardlow and Raheel Nawaz. 2019. Neural text simplification of clinical letters with a domain specific phrase table. In ACL.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "QuickUMLS: a fast, unsupervised approach for medical concept extraction", "authors": [ { "first": "Luca", "middle": [], "last": "Soldaini", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2016, "venue": "MedIR workshop, SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luca Soldaini and Nazli Goharian. 2016. QuickUMLS: a fast, unsupervised approach for medical concept extraction. In MedIR workshop, SIGIR.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "BLEU is not suitable for the evaluation of text simplification", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. BLEU is not suitable for the evaluation of text simplification. In EMNLP.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Simple and effective text simplification using semantic and neural methods", "authors": [ { "first": "Elior", "middle": [], "last": "Sulem", "suffix": "" }, { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport.
2018b. Simple and effective text simplification using semantic and neural methods. In ACL.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Unsupervised neural text simplification", "authors": [ { "first": "Sai", "middle": [], "last": "Surya", "suffix": "" }, { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Anirban", "middle": [], "last": "Laha", "suffix": "" }, { "first": "Parag", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Sankaranarayanan", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised neural text simplification. In ACL.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Internet health information seeking and the patient-physician relationship: a systematic review", "authors": [ { "first": "Sharon", "middle": [ "Swee-Lin" ], "last": "Tan", "suffix": "" }, { "first": "Nadee", "middle": [], "last": "Goonawardene", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Swee-Lin Tan and Nadee Goonawardene. 2017. Internet health information seeking and the patient-physician relationship: a systematic review. JMIR.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Sentence simplification for semantic role labeling", "authors": [ { "first": "David", "middle": [], "last": "Vickrey", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2008, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In ACL.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", "authors": [ { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In EMNLP.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Sentence simplification by monolingual machine translation", "authors": [ { "first": "Sander", "middle": [], "last": "Wubben", "suffix": "" }, { "first": "Antal", "middle": [], "last": "Van Den Bosch", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2012, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation.
In ACL.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In ACL.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. TACL.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. TACL.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Paraphrasing for style", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In COLING.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Sentence simplification with deep reinforcement learning", "authors": [ { "first": "Xingxing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning.
In EMNLP.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "SHAPED: Shared-private encoder-decoder for text style adaptation", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2018, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Zhang, Nan Ding, and Radu Soricut. 2018a. SHAPED: Shared-private encoder-decoder for text style adaptation. In NAACL.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Learning sentiment memories for sentiment modification without parallel data", "authors": [ { "first": "Yi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018b. Learning sentiment memories for sentiment modification without parallel data. In EMNLP.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Style transfer as unsupervised machine translation", "authors": [ { "first": "Zhirui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianyong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2019. Style transfer as unsupervised machine translation. AAAI.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Adversarially regularized autoencoders", "authors": [ { "first": "Jake", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Yann", "middle": [], "last": "LeCun", "suffix": "" } ], "year": 2018, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In ICML.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "A monolingual tree-based translation model for sentence simplification", "authors": [ { "first": "Zhemin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Delphine", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2010, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification.
In COLING.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Examples of Expert Style Transfer. The upper sentences are in expert style while the lower ones are in laymen style. We highlight the knowledge-based differences with red bolded font.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Distribution of the dataset based on topics.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Distribution of the testing set based on PCIO.", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "(a) Impact of concepts. (b) Impact of structure differences. (c) Performance on different PCIO.", "num": null, "uris": null }, "FIGREF4": { "type_str": "figure", "text": "Curves of BLEU and style accuracy, where the x-axis denotes the number of concepts per sentence, edit distance between parallel sentences, and different PCIO elements, respectively.", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "content": "", "text": "Pleural Effusion, Symptoms | Expert: Many cause dyspnea [C0013404], pleuritic chest pain [C0008033], or both. | Laymen: The most common symptoms, regardless of the type of fluid in the pleural space or its cause, are shortness of breath [C2707305;C3274920] and chest pain [C0008031;C2926613].", "num": null, "html": null }, "TABREF1": { "type_str": "table", "content": "
", "text": "Examples of parallel annotation in MSD, where the red fonts in brackets denote UMLS concepts.", "num": null, "html": null }, "TABREF2": { "type_str": "table", "content": "
Metric | MSD Train Expert | MSD Train Layman | Ratio | MSD Test Expert | MSD Test Layman | Ratio | SimpWiki Expert | SimpWiki Layman | Ratio
#Annotation | 0 | 0 | - | 675 | 675 | - | 2,267 | 2,267 | -
#Sentence | 130,349 | 114,674 | - | 930 | 1,047 | 1.13 | 2,326 | 2,307 | 0.99
#Vocabulary | 60,627 | 37,348 | 0.62 | 4,117 | 3,350 | 0.81 | 10,411 | 8,823 | 0.85
#Concept Vocabulary | 24,153 | 15,060 | 0.62 | 1,865 | 1,520 | 0.81 | 2,899 | 2,458 | 0.85
Flesch-Kincaid | 12.61 | 9.97 | 0.79 | 12.05 | 9.53 | 0.79 | 12.10 | 9.63 | 0.80
Gunning | 18.43 | 15.29 | 0.83 | 17.89 | 15.07 | 0.84 | 17.66 | 14.86 | 0.84
Coleman | 12.66 | 10.41 | 0.82 | 12.26 | 9.74 | 0.79 | 10.89 | 9.70 | 0.89
Avg. Readability | 14.57 | 11.89 | 0.81 | 14.07 | 11.45 | 0.81 | 13.55 | 11.40 | 0.84
", "text": ", the", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
", "text": "BLEU (4-gram) and edit distance (ED) scores between parallel sentences. Concept words are masked for ED computation.", "num": null, "html": null }, "TABREF7": { "type_str": "table", "content": "", "text": "Examples of model outputs. Red/blue words with underlines highlight model/expected modifications.", "num": null, "html": null } } } }