|
{ |
|
"paper_id": "N19-1049", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:57:52.032847Z" |
|
}, |
|
"title": "Evaluating Style Transfer for Text", |
|
"authors": [ |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Mir", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Massachusetts Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "rmir@mit.edu" |
|
}, |
|
{ |
|
"first": "Bjarke", |
|
"middle": [], |
|
"last": "Felbo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Massachusetts Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "bfelbo@mit.edu" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Obradovich", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Massachusetts Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Iyad", |
|
"middle": [], |
|
"last": "Rahwan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Massachusetts Institute of Technology", |
|
"location": {} |
|
}, |
|
"email": "irahwan@mit.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Research in the area of style transfer for text is currently bottlenecked by a lack of standard evaluation practices. This paper aims to alleviate this issue by experimentally identifying best practices with a Yelp sentiment dataset. We specify three aspects of interest (style transfer intensity, content preservation, and naturalness) and show how to obtain more reliable measures of them from human evaluation than in previous work. We propose a set of metrics for automated evaluation and demonstrate that they are more strongly correlated and in agreement with human judgment: direction-corrected Earth Mover's Distance, Word Mover's Distance on style-masked texts, and adversarial classification for the respective aspects. We also show that the three examined models exhibit tradeoffs between aspects of interest, demonstrating the importance of evaluating style transfer models at specific points of their tradeoff plots. We release software with our evaluation metrics to facilitate research.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1049", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Research in the area of style transfer for text is currently bottlenecked by a lack of standard evaluation practices. This paper aims to alleviate this issue by experimentally identifying best practices with a Yelp sentiment dataset. We specify three aspects of interest (style transfer intensity, content preservation, and naturalness) and show how to obtain more reliable measures of them from human evaluation than in previous work. We propose a set of metrics for automated evaluation and demonstrate that they are more strongly correlated and in agreement with human judgment: direction-corrected Earth Mover's Distance, Word Mover's Distance on style-masked texts, and adversarial classification for the respective aspects. We also show that the three examined models exhibit tradeoffs between aspects of interest, demonstrating the importance of evaluating style transfer models at specific points of their tradeoff plots. We release software with our evaluation metrics to facilitate research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Style transfer in text is the task of changing an attribute (style) of an input, while retaining nonattribute related content (referred to simply as content for brevity in this paper). 1 For instance, previous work has modified text to make it more positive (Shen et al., 2017) , romantic (Li et al., 2018) , or politically slanted (Prabhumoye et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 277, |
|
"text": "(Shen et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 306, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 357, |
|
"text": "(Prabhumoye et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Some style transfer models enable modifications by manipulating latent representations of the text (Shen et al., 2017; Fu et al., 2018) , while others identify and replace stylerelated words directly (Li et al., 2018) . Regardless of approach, they are hard to compare as there is 1 This definition of style transfer makes a simplifying assumption that \"style\" words can be distinguished from \"content\" words, or words carrying relatively less or no stylistic weight, such as \"caf\u00e8\" in \"What a nice caf\u00e8.\" The definition is motivated by penalizing unnecessary changes to content words, e.g. \"What a nice caf\u00e8\" to \"This is an awful caf\u00e8.\" currently neither a standard set of evaluation practices, nor a clear definition of which exact aspects to evaluate. In Section 2, we define three key aspects to consider. In Section 3, we summarize issues with previously used metrics. Many rely on human ratings, which can be expensive and timeconsuming to obtain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 118, |
|
"text": "(Shen et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 135, |
|
"text": "Fu et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 217, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 282, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To address these issues, in Section 4, we consider how to obtain more reliable measures of human judgment for aspects of interest, and automated methods more strongly correlated with human judgment than previously used methods. Lastly, in Section 5, we show that the three examined models exhibit aspect tradeoffs, highlighting the importance of evaluating style transfer models at specific points of their tradeoff plots. We release software with our evaluation metrics at https://github.com/passeul/ style-transfer-model-evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We consider three aspects of interest on which to evaluate output text x of a style transfer model, potentially with respect to input text x:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Aspects of Evaluation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. style transfer intensity ST I(SC(x), SC(x )) quantifies the difference in style, where SC(\u2022) maps an input to a style distribution 2. content preservation CP (x, x ) quantifies the similarity in content between the input and the output 3. naturalness N T (x ) quantifies the degree to which the output appears as if it could have been written by humans Style transfer models should be compared across all three aspects to properly characterize differences. For instance, if a model transfers from negative to positive sentiment, but alters content such as place names, it preserves content poorly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Aspects of Evaluation", |
|
"sec_num": "2" |
|
}, |
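
{

"text": "As a brief illustration (our sketch, not the paper's released implementation), SC(\u2022) can be realized by any probabilistic classifier. The snippet below, assuming scikit-learn and a toy corpus standing in for X with binary sentiment labels, produces the style distributions that STI(SC(x), SC(x')) compares:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\n# Toy stand-ins for X and its style labels (assumption for illustration).\nX = ['the food was great', 'service was awful', 'loved it', 'terrible place']\ny = [1, 0, 1, 0]\n\nvec = CountVectorizer()\nsc = LogisticRegression().fit(vec.fit_transform(X), y)\n\ndef SC(text):\n    # Map a text to a probability distribution over style classes.\n    return sc.predict_proba(vec.transform([text]))[0]\n\nx, x_out = 'service was awful', 'service was great'\nprint(SC(x), SC(x_out))  # the two distributions that STI compares",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Aspects of Evaluation",

"sec_num": "2"

},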
|
{ |
|
"text": "Content Preservation Naturalness HRC(x ) HRD(x ) SC(x ) HRC(x,x ) HRR(x,{x }) BLEU(x,x ) HRC(x ) PPL (x ) CAAE x x x x F ARAE x x x x x x F DAR x x x x x G Table 1 : Summary of past evaluation techniques. HRC is human rating on a continuous scale (e.g. 1 to 5). HRD is on discrete options (e.g. positive/negative). HRR is human ranking (most to least similar) of outputs, with respect to given input x. {x } is the set of x from models trained on different parameters. SC is a style classifier. PPL is perplexity. Superscripts denote that evaluation is done for fluency (F) or grammar (G), which we consider subsets of naturalness. Readers can see the original papers for details on methods falling under these techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 183, |
|
"text": "(x ) CAAE x x x x F ARAE x x x x x x F DAR x x x x x G Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Style Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If it preserves content well, but sequentially repeats words such as \"the\", the output is unnatural. Conversely, a model that overemphasizes text reconstruction would yield high content preservation and possibly high naturalness, but little to no style transfer. All three aspects are thus critical to analyze in a system of style transfer evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Style Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We review previously used approaches for evaluating the outputs of style transfer models. Due to the high costs related to obtaining human evaluations, we focus on three models: the crossaligned autoencoder (CAAE), adversarially regularized autoencoder (ARAE), and delete-andretrieve (DAR) models (Shen et al., 2017; Li et al., 2018) . Table 1 illustrates the spread of evaluation practices in these papers using our notation from Section 2, showing that they all rely on a different combination of human and automated evaluation. For human evaluation, the papers use different instruction sets and scales, making it difficult to compare scores. Below we describe the automated metrics used for each aspect. Some rely on training external models on the corpus of input texts, X, and/or the corpus of output texts, X . We encourage readers seeking details on how to compute the metrics to reference the algorithms in the original papers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 316, |
|
"text": "(Shen et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 333, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 343, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Style Transfer Previous work has trained classifiers on X and corresponding style labels, and measured the number of outputs classified as having a target style (Shen et al., 2017; Li et al., 2018) . Results from this target style scoring approach may not be directly comparable across papers due to different classifiers used in evaluations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 180, |
|
"text": "(Shen et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 197, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To evaluate content preservation between x and x , previous work has used BLEU Li et al., 2018) , an n-gram based metric originally designed to evaluate machine translation models (Papineni et al., 2002) . BLEU does not take into account the aim of style transfer models, which is to alter style by necessarily changing words. Intended differences between x and x are thus penalized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 95, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 203, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Naturalness Past evaluations of naturalness have relied largely on human ratings on a variety of scales under different names: grammaticality, fluency/readability, and naturalness itself (Table 1). An issue with measuring grammaticality is that text with proper syntax can still be semantically nonsensical, e.g. \"Colorless green ideas sleep furiously\" (Chomsky, 1957) . Furthermore, input texts may not demonstrate perfect grammaticality or readability, despite being written by humans and thus being natural by definition (Section 2). This undermines the effectiveness of measures for such specific qualities of output texts. used perplexity to evaluate fluency, which, like grammaticality, we consider a subset of naturalness itself. Low perplexity signifies less uncertainty over which words can be used to continue a sequence, quantifying the ability of a language model to predict gold or reference texts (Brown et al., 1992; Young et al., 2006) . However, style transfer outputs are not necessarily gold standard, and the correlation between perplexity and human judgments of those outputs is unknown in the style transfer setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 368, |
|
"text": "(Chomsky, 1957)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 911, |
|
"end": 931, |
|
"text": "(Brown et al., 1992;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 932, |
|
"end": 951, |
|
"text": "Young et al., 2006)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We describe how to construct a style lexicon for use in human and automated evaluations. We also describe best practices that we recommend for obtaining scores of those evaluations, as well as how they can be used for evaluating other datasets. Please refer to Section 5 for experimental results. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Because the process of style transfer may result in the substitution or removal of more stylistically weighted words, it is ideal to have a lexicon of style-related words to reference. Words in x and/or x that also appear in the lexicon can be ignored in evaluations of content preservation. While building a new style lexicon or an extension of existing ones like WordNet-Affect (Strapparava and Valitutti, 2004) may be feasible with binary sentiment as the style, it may not be scalable to manually do so for various other types of styles. Static lexica also might not take context into account. This is an issue for text with words or phrases that are ambiguous in terms of stylistic weight, e.g. \"dog\" in \"That is a man with a dog\" vs. \"That man is a dog.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Construction of Style Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "It is more appropriate to automate the construction of a style lexicon per dataset of interest. While multiple options may exist for doing so, we emphasize the simplicity and replicability of training a logistic regression classifier on X and corresponding style labels. We populate the lexicon with features having the highest absolute weights, as those have the most impact on the outcome of the style labels. (Table 2 shows sample words in the lexicon constructed for the dataset used in our experiments.) While sentiment datasets have been widely used in the literature (Shen et al., 2017; Li et al., 2018) , a lexicon can be constructed for other datasets in the same manner, as long as the dataset has style labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 574, |
|
"end": 593, |
|
"text": "(Shen et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 610, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 420, |
|
"text": "(Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Construction of Style Lexicon", |
|
"sec_num": "4.1" |
|
}, |
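
{

"text": "A minimal sketch of this construction, assuming scikit-learn, a list of texts X, and binary style labels; the weight cutoff of 1.0 is an illustrative choice rather than the paper's exact threshold:\n\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\n\ndef build_style_lexicon(X, labels, weight_cutoff=1.0):\n    # Fit a bag-of-words logistic regression on texts and style labels.\n    vec = CountVectorizer()\n    clf = LogisticRegression().fit(vec.fit_transform(X), labels)\n    # Keep features with the highest absolute weights: these words most\n    # strongly drive the predicted style label (higher precision, lower\n    # recall as the cutoff grows).\n    vocab = np.array(vec.get_feature_names_out())\n    return set(vocab[np.abs(clf.coef_[0]) > weight_cutoff])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Construction of Style Lexicon",

"sec_num": "4.1"

},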
|
{ |
|
"text": "Given existing NLP techniques, it may not be possible to correctly identify all style-related words in a text. Consequently, there is a tradeoff between identifying more style-related words and incorrectly marking some other (content) words as style-related. We opt for higher precision and lower recall to minimize the risk of removing content words, which are essential to evaluations of content preservation. This issue is not critical because researchers can compare their style transfer methods using our lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Construction of Style Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As seen in Table 1 , past evaluations of both style transfer and naturalness consider only output text x . Existing work from other fields have, however, shown that asking human raters to evaluate two relative comparisons provides more accurate scores than asking them to provide a numerical score for a single observation (Stewart et al., 2005; Bijmolt and Wedel, 1995) . With this knowledge, we construct more reliable ways of obtaining human evaluations via relative scoring instead of absolute scoring.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 345, |
|
"text": "(Stewart et al., 2005;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 370, |
|
"text": "Bijmolt and Wedel, 1995)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 18, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Style Transfer Intensity Past evaluations have raters mark the degree to which x exhibits a target style (Li et al., 2018) . We instead ask raters to score the difference in style between x and x , on a scale of 1 (identical styles) to 5 (completely different styles). This approach can also used for non-binary cases. Consider text modeled as a distribution over multiple emotions (e.g. happy, sad, scared, etc.), where each emotion can be thought of as a style. One task could be to make a scared text more happy. Presented with x and x , raters would still rate the degree to which they differ in style.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 122, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We consider the difficulty of asking raters to ignore style-related words as done in (Shen et al., 2017) . Because not all raters may identify the same words as stylistic, their evaluations may vary substantially from one another. To account for this, we ask raters to evaluate content preservation on the same texts, but where we have masked style words using our style lexicon. Under this new \"masking\" approach, raters have a simpler task, as they are no longer responsible for taking style into account when they rate the similarity of two texts on a scale of 1 to 5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 104, |
|
"text": "(Shen et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Naturalness We ask raters to determine whether x or x (they are not told which is which) is more natural. An x marked as more natural indicates some success on the part of the style transfer model, as it is able to fool the rater. This is in contrast to previous work, where raters score the naturalness of x on a continuous scale without taking x into account at all, even though x serves as the basis for comparison of what is considered natural.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we describe our approaches to automating the evaluation of each aspect of interest. Style Transfer Intensity Rather than count how many output texts achieve a target style, we can capture more nuanced differences between the style distributions of x and x , using Earth Mover's Distance (Rubner et al., 1998; Pele and Werman, 2009) . EM D(SC(x), SC(x )) is the minimum \"cost\" to turn one distribution into the other, or how \"intense\" the transfer is. Distributions can have any number of values (styles), so EMD handles binary and non-binary datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 325, |
|
"text": "(Rubner et al., 1998;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 348, |
|
"text": "Pele and Werman, 2009)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Note that even if argmax(SC(x )) is not the target style class, EM D still acknowledges movement towards the target style with respect to SC(x). However, we penalize (negate) the score if SC(x ) displays a relative change of style in the wrong direction, away from the target style.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "4.3" |
|
}, |
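
{

"text": "A sketch of direction-corrected EMD under these definitions, assuming the pyemd package and a unit ground distance between distinct style classes (both illustrative choices; the released software may differ):\n\nimport numpy as np\nfrom pyemd import emd\n\ndef direction_corrected_emd(sc_x, sc_out, target_class):\n    # EMD between the style distributions of input and output.\n    n = len(sc_x)\n    ground_distance = np.ones((n, n)) - np.eye(n)\n    score = emd(np.asarray(sc_x, float), np.asarray(sc_out, float),\n                ground_distance)\n    # Negate when probability mass moved away from the target style.\n    if sc_out[target_class] < sc_x[target_class]:\n        score = -score\n    return score\n\n# e.g. negative -> positive transfer over classes [neg, pos]:\nprint(direction_corrected_emd([0.9, 0.1], [0.3, 0.7], target_class=1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automated Evaluation",

"sec_num": "4.3"

},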
|
{ |
|
"text": "Depending on x, not a lot of rewriting may be necessary to achieve a different style. This is not an issue, as ST I relies on a style classifier to quantify not the difference between the content of x and x , but their style distributions. For the style classifier, we experiment with textcnn (Kim, 2014; Lee, 2018) and fastText (Joulin et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 304, |
|
"text": "(Kim, 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 315, |
|
"text": "Lee, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 350, |
|
"text": "(Joulin et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We first subject texts to different settings of modification: style removal and style masking. This is to address undesired penalization of metrics on texts expected to demonstrate changes after style transfer (Section 3). For style removal, we remove style words from x and x using the style lexicon. For masking, we replace those words with a customstyle placeholder. Table 3 exemplifies these modifications.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 377, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
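
{

"text": "A minimal sketch of the two modification settings, assuming whitespace-tokenized text and a lexicon like the one built in Section 4.1; the angle-bracketed token is our rendering of the customstyle placeholder:\n\ndef mask_style(text, lexicon, placeholder='<customstyle>'):\n    # Style masking: replace lexicon words with the placeholder.\n    return ' '.join(placeholder if w in lexicon else w for w in text.split())\n\ndef remove_style(text, lexicon):\n    # Style removal: drop lexicon words entirely.\n    return ' '.join(w for w in text.split() if w not in lexicon)\n\nlexicon = {'awful', 'great'}\nprint(mask_style('service was awful', lexicon))    # service was <customstyle>\nprint(remove_style('service was awful', lexicon))  # service was",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Preservation",

"sec_num": null

},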
|
{ |
|
"text": "For measuring the degree of content preservation, in addition to the widely used BLEU, we consider METEOR and embedding-based metrics. METEOR is an n-gram based metric like BLEU, but handles sentence-level scoring more robustly, allowing it to be both a sentence-level and corpuslevel metric (Banerjee and Lavie, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 318, |
|
"text": "(Banerjee and Lavie, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the embedding-based metrics, word embeddings can be obtained with methods like Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) . Sentence-level embeddings can be comprised of the most extreme values of word embeddings per dimension (vector extrema) (Forgues et al., 2014) , or word embedding averages (Sharma et al., 2017) . Word Mover's Distance (WMD), based on EM D, calculates the minimum \"distance\" between word embeddings of x and of x , where smaller distances signify higher similarity (Kusner et al., 2015) . Greedy matching greedily matches words in x and x based on their embeddings, calculates their similarity (e.g. cosine similarity), and averages all the similarities. It repeats the process in the reverse direction and takes the average of those two scores (Rus and Lintean, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 114, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 149, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 294, |
|
"text": "(Forgues et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 345, |
|
"text": "(Sharma et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 537, |
|
"text": "(Kusner et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 819, |
|
"text": "(Rus and Lintean, 2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
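
{

"text": "As a sketch, WMD over style-masked texts can be computed with gensim's KeyedVectors (an assumption about tooling, using a small pretrained embedding for illustration; the released software may differ):\n\nimport gensim.downloader as api\n\nvectors = api.load('glove-wiki-gigaword-50')  # any word vectors work here\n\ndef wmd_on_masked(x_masked, out_masked):\n    # Lower WMD means more similar content, so the raw score is\n    # anti-correlated with human similarity judgments (Section 5.3).\n    return vectors.wmdistance(x_masked.lower().split(),\n                              out_masked.lower().split())\n\nprint(wmd_on_masked('the <customstyle> service', 'the <customstyle> staff'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Preservation",

"sec_num": null

},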
|
{ |
|
"text": "We evaluate with all these metrics to identify the one most strongly correlated with human judgment of content preservation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Naturalness For a baseline understanding of what is considered \"natural,\" any method used for automated evaluation of naturalness requires the human-sourced input texts. We train unigram and neural logistic regression classifiers (Bowman et al., 2016) on samples of X and X for each transfer model. Via adversarial evaluation, these classifiers must distinguish human-generated inputs from machine-generated outputs. The more natural an output is, the likelier it is to fool a classifier (Jurafsky and Martin, 2018) . We calculate agreement between each type of human evaluation (Section 4.2) and each classifier AC. Agreement is the ratio of instances where humans and AC rate a text as more natural than the other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 515, |
|
"text": "(Jurafsky and Martin, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
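
{

"text": "A sketch of this agreement computation, assuming per-pair binary choices (1 if x' was judged more natural than x, else 0) from the human raters and from an adversarial classifier AC:\n\ndef naturalness_agreement(human_choices, ac_choices):\n    # Fraction of (x, x') pairs where the human and the adversarial\n    # classifier pick the same member of the pair as more natural.\n    assert len(human_choices) == len(ac_choices)\n    matches = sum(h == c for h, c in zip(human_choices, ac_choices))\n    return matches / len(human_choices)\n\nprint(naturalness_agreement([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Preservation",

"sec_num": null

},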
|
{ |
|
"text": "We also train LSTM language models (Hochreiter and Schmidhuber, 1997) on X and compute sentence-level perplexities for each text in X in order to determine the relative effectiveness of adversarial classification as a metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Preservation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Due to high costs of human evaluation, we focus on CAAE, ARAE, and DAR models with transfer tasks based on samples from the Yelp binary sentiment dataset (Shen et al., 2017 highly recommend this place while living in tempe and management . CAAE (\u03c1 = 0.5) would highly recommend management on duty and staff on business . DAR (\u03b3 = 500)", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 172, |
|
"text": "(Shen et al., 2017", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "until management works on friendliness and is a great place for communication with residents . Table 4 : Sample outputs of a negative to positive sentiment style transfer task. Italicized words are style-related, according to a style lexicon. They can be masked or removed in evaluations of content preservation (Section 4.3).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 102, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "the range of parameters each model is trained on in order to compare evaluation practices and generate aspect tradeoff plots. Each of three Amazon Turk raters evaluated 244 texts per aspect, per model. Of those texts, half are originally of positive sentiment transferred to negative, and vice versa. For brevity, we reference average scores (correlation, kappa, and agreement, each of which is described below) from across all models in our analysis of results. For detailed scores per model, please refer to the corresponding tables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For each style transfer model, we choose a wide range of training parameters to allow for variation of content preservation, and indirectly, of style transfer intensity, in X. We show sample outputs from the models for a given input text in Table 4. CAAE uses autoencoders (Vincent et al., 2008 ) that are cross-aligned, assuming that texts already share a latent content distribution (Shen et al., 2017) . It uses latent states of the RNN and multiple discriminators to align distributions of texts in X exhibiting one style with distributions of texts in X exhibiting another. Adversarial components help separate style information from the latent space where inputs are represented. We train CAAE on various values (0.01, 0.1, 0.5, 1, 5) of \u03c1, a weight on the adversarial loss.", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 294, |
|
"text": "(Vincent et al., 2008", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 404, |
|
"text": "(Shen et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 249, |
|
"text": "Table 4.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Style Transfer Models", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "CAAE is a baseline for other style transfer models, such as ARAE, which trains a separate decoder per style class . We train ARAE on various values (1, 5, 10) of \u03bb, which is also a weight on adversarial loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Style Transfer Models", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The third model that we evaluate, which also uses CAAE as a baseline, avoids adversarial methods in an approach called Delete-and-Retrieve (DAR) (Li et al., 2018) . It identifies and removes style words from texts, searches for related words pertaining to a new target style, and combines the de-stylized text with the search results using a neural model. We train DAR on \u03b3 = 15, where \u03b3 is a threshold parameter for the maximum number of style words that can be removed from texts, with respect to the size of the corpus vocabulary. For this single training value, we experiment with a range of \u03b3 values (0.1, 1, 15, 500) during test time because, by design, the model does not need to be retrained (Li et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 162, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 700, |
|
"end": 717, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Style Transfer Models", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We use Fleiss' kappa \u03ba of inter-rater reliability (see formula in L. Fleiss and Cohen, 1973) to identify the more effective human scoring task for different aspects of interest. The kappa metric is often levied in a relative fashion, as there are no universally accepted thresholds for agreements that are slight, fair, moderate, etc. For comprehensive experimentation, we compare kappas over the outputs of each style transfer model. The kappa score for ratings of content preservation based on style-masked texts is 0.297. Given the kappa score of 0.173 for unmasked texts, style masking is a more reliable approach towards human evaluation for content preservation (Table 5) . For style transfer intensity, kappas for relative scoring do not show improvement over the previously used approach of absolute scoring of x . However, we observe the opposite for the aspect of naturalness. Kappas for relative naturalness scoring tasks exceed those of the absolute scoring ones (Table 6 ). Despite the two types of tasks having Table 9 : Absolute correlations of content preservation metrics with human scores on texts with style masking. different numbers of categories (2 vs 5), we can compare them by using a threshold \u03c4 to bin the absolute score for each text into a \"natural\" group (x is considered to be more natural than x) or \"unnatural\" one (vice versa), like in relative scoring. For example, \u03c4 = 2 places texts with absolute scores greater than or equal to 2 into the natural group. Judgments for relative tasks yield greater inter-rater reliability than those of absolute tasks across multiple thresholds (\u03c4 \u2208 {2, 3}). This suggests that the relative scoring paradigm is preferable in human evaluations of naturalness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 92, |
|
"text": "Fleiss and Cohen, 1973)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 668, |
|
"end": 677, |
|
"text": "(Table 5)", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 975, |
|
"end": 983, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 1025, |
|
"end": 1032, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.2" |
|
}, |
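
{

"text": "For reference, a compact sketch of the standard Fleiss' kappa formula (our implementation), where counts[i][j] is the number of raters assigning item i to category j and every row sums to the same number of raters n:\n\nimport numpy as np\n\ndef fleiss_kappa(counts):\n    counts = np.asarray(counts, float)\n    N = counts.shape[0]\n    n = counts.sum(axis=1)[0]\n    p_j = counts.sum(axis=0) / (N * n)                     # category shares\n    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement\n    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()              # observed vs chance\n    return (P_bar - P_e) / (1 - P_e)\n\n# 3 raters, 4 texts, 2 categories (natural / unnatural): ~0.333\nprint(fleiss_kappa([[3, 0], [2, 1], [0, 3], [1, 2]]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Human Evaluation",

"sec_num": "5.2"

},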
|
{ |
|
"text": "Per aspect of interest, we compute Pearson correlations between scores from the existing metric and human judgments. (As there were three raters for any given scoring task, we take the average of their scores.) We do the same for our proposed metrics to identify which metric is more reliable for automated evaluation of a given aspect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For style transfer intensity, across both the fastText and textcnn classifiers, our proposed direction-corrected Earth Mover's Distance metric has higher correlation with human scores than the past approach of target style scoring (Table 7) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 240, |
|
"text": "(Table 7)", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For content preservation, METEOR, shown to have higher correlation with human judgments Table 10 : Percent agreement between adversarial classifiers and human scores on the naturalness of texts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Table 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "than BLEU for machine translation (Banerjee and Lavie, 2005) , shows the same relationship for style transfer. However, across various text modification settings, WMD generally shows the strongest correlation with human scores (Tables 8 and 9 ). Because WMD is lower when texts are more similar, it is anti-correlated with human scores. We take absolute correlations to facilitate comparison with other content preservation metrics. With respect to text modification, style masking may be more suitable as it, on average for WMD, exhibits a higher correlation with human judgments. For naturalness, both unigram and neural classifiers exhibit greater agreement on which texts are considered more natural with the humans given relative scoring tasks than with those given absolute scoring tasks (Table 10) , although the neural classifier achieves higher agreements on average. We also confirm that sentence-level perplexity is not an appropriate metric. It exhibits no significant correlation with human scores (\u03b1 = 0.05). These results suggest that adversarial classifiers can be useful for automating measurement of naturalness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 60, |
|
"text": "(Banerjee and Lavie, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 242, |
|
"text": "(Tables 8 and 9", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 794, |
|
"end": 804, |
|
"text": "(Table 10)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Previous work has compared models with respect to a single aspect of interest at a time, but has only, to a limited degree, considered how relationships between multiple aspects influence these comparisons. In particular, concurrent work by (Li et al., 2018) examines tradeoff plots, but focuses primarily on variants of its own model, while including only a single point on the plots of style transfer models from other papers. For a comprehensive comparison, it is ideal to have plots for all models. It is helpful to first understand the tradeoff space. For example, we define extreme cases for style transfer intensity and content preservation, where we assume measurement of the latter ignores stylistic content. Consider two classes of suboptimal models. One class produces outputs with a wide range of style transfer intensity, but poor content preservation (Figure 1a) . The other class of models produces outputs with low style transfer intensity, but a wide range of content preservation (Figure 1b) . This is in contrast to a model that yields a wide range of style transfer intensity and consistently high content preservation (Figure 1c ). If we take that to be an ideal model for a sentiment dataset, we can interpret models with better performance to be the ones whose tradeoff plots are closer to that of the ideal model and farther from those of the suboptimal ones. The plot for an ideal model will likely vary by dataset, especially because the tradeoff between content preservation and style transfer intensity depends on the level of distinction between style words and content words of the dataset. With this interpretation of the tradeoff space, we construct a plot for each style transfer model (Figure 2) , where each point represents a different hyperparameter setting for training (Section 5.1). We collect scores based on the automated metrics most strongly correlated with human judgment: direction-corrected EMD for style transfer intensity, WMD for content preservation, and percent of output texts marked by an adversarial classifier as more natural than input texts. Because WMD scores are lower when texts are more similar, we instead take the normalized inverses of the scores to represent the degree of content preservation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 258, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 865, |
|
"end": 876, |
|
"text": "(Figure 1a)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1009, |
|
"text": "(Figure 1b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1139, |
|
"end": 1149, |
|
"text": "(Figure 1c", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1719, |
|
"end": 1729, |
|
"text": "(Figure 2)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Aspect Tradeoffs", |
|
"sec_num": "5.4" |
|
}, |
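
{

"text": "One plausible reading of this normalization (our sketch; the released code may normalize differently) is to invert each WMD score and rescale the results into [0, 1] before plotting:\n\nimport numpy as np\n\ndef content_preservation_axis(wmd_scores, eps=1e-9):\n    # Invert: smaller WMD (more similar content) -> larger score.\n    inv = 1.0 / (np.asarray(wmd_scores, float) + eps)\n    # Min-max normalize to [0, 1] for the tradeoff plot axis.\n    return (inv - inv.min()) / (inv.max() - inv.min())\n\nprint(content_preservation_axis([0.2, 0.5, 1.0]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Aspect Tradeoffs",

"sec_num": "5.4"

},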
|
{ |
|
"text": "Across all models, there is a trend of reduction in content preservation and naturalness as style transfer intensity increases. Without the plots, one might conclude that ARAE and DAR perform substantially differently, especially if hyperparameters are chosen such that ARAE achieves the leftmost point on its plot and DAR achieves the rightmost point on its plot. With the plots, at least for the set of hyperparameters considered, it is evident that they perform comparably (Figure 2a ) and do not exhibit the same level of decrease in naturalness as CAAE (Figure 2b ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 486, |
|
"text": "(Figure 2a", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 568, |
|
"text": "(Figure 2b", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Aspect Tradeoffs", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Previous work on style transfer models used a variety of evaluation methods (Table 1) , making it difficult to meaningfully compare results across papers. Moreover, it is not clear from existing research how exactly to define particular aspects of interest, or which methods (whether human or automated) are most suitable for evaluating and comparing different style transfer models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 85, |
|
"text": "(Table 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To address these issues, we specified key aspects of interest (style transfer intensity, content preservation, and naturalness) and showed how to obtain more reliable measures of them from human evaluation than in previous work. Our proposed automated metrics (direction-corrected EMD, WMD on style-masked texts, and adversarial classification) exhibited stronger correlations with human scores than existing automated metrics on a binary sentiment dataset. While human evaluation may still be useful in future research, automation facilitates evaluation when it is infeasible to collect human scores due to prohibitive cost or limited time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For style transfer intensity, the relative scoring task (rating the degree of stylistic difference between x and x ) did not have greater rater reliability than the previously used task of rating output texts on an absolute scale. This is likely due to task complexity or rater uncertainty, which motivates the need for further exploration of task design for this particular aspect of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For content preservation, our form of human evaluation operates on texts whose style words are masked out, unlike the previous approach (no masking). Our approach addresses the unintentional variable of rater-dependent style identification that could lead to noisy, less reliable ratings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Identification and masking of words was made possible with a style lexicon. We automatically constructed the lexicon in a way that can be done for any style dataset, as long as style labels are available (Section 4.1). We acknowledge a tradeoff between filling the lexicon with more style words and being conservative in order to avoid capturing content words. We justify taking a more conservative approach as content words are naturally critical to evaluations of content preservation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For naturalness, we introduced a paradigm of relative scoring that uses both the output and input texts. This achieved a higher inter-rater reliability than did absolute scoring, the previous approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For style transfer intensity, we proposed using a metric with EMD as the basis to acknowledge the spectrum of styles that can appear in outputs and to handle both binary and non-binary datasets. The metric also accounts for direction by penalizing scores in the cases where the style distribution of the output text explicitly moves away from the target style. Previous work used external classifiers, whose style distributions for x and x can be used to calculate direction-corrected EMD, making it a simple addition to the evaluation workflow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For content preservation, WMD (based on EMD) works in a similar fashion, but with word embeddings of x and of x . BLEU, used widely in previous work, may yield weaker correlations with human judgment in comparison as it was designed to have multiple reference texts per candidate text (Papineni et al., 2002) . Several reference texts, which are more common in machine translation tasks, increase the chance of n-gram (such as n \u2265 3) overlap with the candidate. In the style transfer setting, however, the only reference text for x is x. Having a single reference text reduces the likelihood of overlap and the overall effectiveness of BLEU.", |
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 308, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For naturalness, strong agreement of adversarial classifiers with relative scores assigned by humans suggest that classifiers are suitable for automated evaluation. One might assume input texts would almost always be rated as more natural by both humans and classifiers, biasing the agreement. This is not the case, as we justify our rating scheme with evidence of outputs being rated as more natural across several models (Figure 2b ). Output texts classified as more natural indicate some success for a style transfer model, as it can produce texts with a quality like that of human-generated inputs, which are, by definition, natural.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 433, |
|
"text": "(Figure 2b", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, with aspect tradeoff plots constructed using scores from the automated metrics, we can directly compare models with respect to multiple aspects simultaneously. Points of intersection, or near intersection, for different models signify that they, at the hyperparameters that yielded those points, can achieve similar results for various aspects. These parameters can be useful for understanding the impact of decisions made during model design and optimization phases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automated Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "As we confirmed, sentence-level perplexity of output x is not meaningful by itself for the automated evaluation of naturalness. The idea of using both x and x , akin to how we train automated classifiers of naturalness (Section 4.3), can be extended to construct a perplexity-based metric that also takes into account the perplexity of input x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Research", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Another avenue for future work could be evaluating on datasets with a different style or number of style classes. It is worth studying the distinction between style words and content words in the vocabulary of each such dataset. Given the definition of style transfer and its simplifying assumption in Section 1, it would be reasonable to expect naturally low content preservation scores for any given style transfer model operating on datasets with less distinction, such as those of formality. This is not so much an issue as it is a datasetspecific trend that can be visualized in corresponding tradeoff plots, which would provide a holistic evaluation of model performance. In any case, results from inter-rater reliability and correlation testing on these additional datasets would overall enable more consistent evaluation practices and further progress in style transfer research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Research", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Like most literature, including the papers on CAAE, ARAE and DAR, we focus on the binary case. Creating a high-quality, multi-label style transfer dataset for evaluation is a demanding task, which is out of scope for this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Juncen Li, Tianxiao Shen, and Junbo (Jake) Zhao for guidance in the use of their respective style transfer models. These models serve as markers of major progress in the area of style transfer research, without which this work would not have been possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", |
|
"authors": [ |
|
{ |
|
"first": "Satanjeev", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The effects of alternative methods of collecting similarity data for multidimensional scaling", |
|
"authors": [ |
|
{ |
|
"first": "Tammo", |
|
"middle": [], |
|
"last": "Bijmolt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Wedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "International Journal of Research in Marketing", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "363--371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tammo Bijmolt and Michel Wedel. 1995. The effects of alternative methods of collecting similarity data for multidimensional scaling. International Journal of Research in Marketing, 12(4):363-371.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Generating sentences from a continuous space", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vilnis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Jozefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Confer- ence on Computational Natural Language Learning, pages 10-21. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An estimate of an upper bound for the entropy of english", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Robert L. Mer- cer, Stephen A. Della Pietra, and Jennifer C. Lai. 1992. An estimate of an upper bound for the entropy of english. Computational Linguistics, 18(1).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Syntactic Structures. Mouton and Co", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Chomsky. 1957. Syntactic Structures. Mouton and Co., The Hague.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bootstrapping dialog systems with word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Forgues", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Marie", |
|
"middle": [], |
|
"last": "Larchev\u00eaque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9al", |
|
"middle": [], |
|
"last": "Tremblay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Forgues, Joelle Pineau, Jean-Marie Larchev\u00eaque, and R\u00e9al Tremblay. 2014. Boot- strapping dialog systems with word embeddings.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Style transfer in text: Exploration and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Zhenxin", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoye", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "427--431", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Speech and Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Jurafsky and James H Martin. 2018. Speech and Language Processing, volume 3.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1746--1751", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746-1751. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "From word embeddings to document distances", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Kusner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Kolkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on International Conference on Machine Learning", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "957--966", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd In- ternational Conference on International Conference on Machine Learning -Volume 37, pages 957-966.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "Educational and Psychological Measurement", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "613--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L. Fleiss and Jacob Cohen. 1973. The equiv- alence of weighted kappa and the intraclass corre- lation coefficient as measures of reliability. Edu- cational and Psychological Measurement, 33:613- 619.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Juncen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1865--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sen- timent and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1865-1874. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Bleu: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 311-318. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Fast and robust earth mover's distances", |
|
"authors": [ |
|
{ |
|
"first": "Ofir", |
|
"middle": [], |
|
"last": "Pele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Werman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "460--467", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ofir Pele and Michael Werman. 2009. Fast and robust earth mover's distances. pages 460-467. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Style transfer through back-translation", |
|
"authors": [ |
|
{ |
|
"first": "Shrimai", |
|
"middle": [], |
|
"last": "Prabhumoye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Black", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "866--876", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 866-876. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A metric for distributions with applications to image databases", |
|
"authors": [ |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Rubner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Tomasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Guibas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Sixth International Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "59--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. 1998. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision, pages 59-66. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics", |
|
"authors": [ |
|
{ |
|
"first": "Vasile", |
|
"middle": [], |
|
"last": "Rus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Lintean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "157--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157- 162. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation", |
|
"authors": [ |
|
{ |
|
"first": "Shikhar", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Layla", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Asri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannes", |
|
"middle": [], |
|
"last": "Schulz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremie", |
|
"middle": [], |
|
"last": "Zumer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating nat- ural language generation. CoRR, abs/1706.09799.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Style transfer from non-parallel text by cross-alignment", |
|
"authors": [ |
|
{ |
|
"first": "Tianxiao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "6830--6841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Informa- tion Processing Systems 30, pages 6830-6841.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Absolute identification by relative judgment", |
|
"authors": [ |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gordon", |
|
"middle": [ |
|
"D", |
|
"A" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Chater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Psychological review", |
|
"volume": "112", |
|
"issue": "4", |
|
"pages": "881--911", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neil Stewart, Gordon DA Brown, and Nick Chater. 2005. Absolute identification by relative judgment. Psychological review, 112(4):881-911.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Wordnet affect: an affective extension of wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Valitutti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carlo Strapparava and Alessandro Valitutti. 2004. Wordnet affect: an affective extension of word- net. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04). European Language Resources Associ- ation (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Extracting and composing robust features with denoising autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1096--1103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- coders. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 1096-1103. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The HTK Book Version 3.4", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kershaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Odell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ollason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Valtchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steve J. Young, D. Kershaw, J. Odell, D. Ollason, V. Valtchev, and P. Woodland. 2006. The HTK Book Version 3.4. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Adversarially regularized autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 35th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5902--5911", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. 2018. Adversarially regular- ized autoencoders. In Proceedings of the 35th In- ternational Conference on Machine Learning, vol- ume 80 of Proceedings of Machine Learning Re- search, pages 5902-5911.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Extreme tradeoff plots, with style transfer intensity on the x-axis and content preservation on the y-axis.(a) Content vs. Style Tradeoffs (b) Naturalness vs. Style Tradeoffs" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Tradeoffs between aspects of evaluation, using metrics most strongly correlated with human scores." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Sample of words in a sentiment style lexicon.", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "No modificationInput: the girls up front incompetent . Output: the girls up front are amazing .Style removalInput: the girls up front . Output: the girls up front are .Style maskingInput: the girls up front customstyle . Output: the girls up front are customstyle .", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Text under different settings of style-based modification, as used in evaluations of content preservation. The sample output is from ARAE (\u03bb = 1).", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "). 2 Below we detail Input would n't recommend until management works on friendliness and communication with residents . ARAE (\u03bb = 1)", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Fleiss' kappas for human judgments of content preservation of unmasked and style-masked texts.", |
|
"content": "<table><tr><td>Model</td><td colspan=\"2\">Absolute \u03c4 = 3 \u03c4 = 2</td><td>Relative</td></tr><tr><td>CAAE</td><td>0.193</td><td>0.321</td><td>0.579</td></tr><tr><td>ARAE</td><td>0.215</td><td>0.415</td><td>0.741</td></tr><tr><td>DAR</td><td>0.103</td><td>0.201</td><td>0.259</td></tr><tr><td colspan=\"2\">Average 0.170</td><td>0.312</td><td>0.526</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Correlations of automated style transfer intensity metrics with human scores.", |
|
"content": "<table><tr><td>Model</td><td>BLEU</td><td>METEOR</td><td>Embed Average Greedy Match Vector Extrema</td><td>WMD</td></tr><tr><td>CAAE</td><td colspan=\"4\">0.458 \u00b1 0.044 0.498 \u00b1 0.042 0.370 \u00b1 0.048 0.489 \u00b1 0.043 0.496 \u00b1 0.042 0.496 \u00b1 0.042</td></tr><tr><td>ARAE</td><td colspan=\"4\">0.337 \u00b1 0.064 0.387 \u00b1 0.062 0.313 \u00b1 0.065 0.419 \u00b1 0.060 0.423 \u00b1 0.060 0.445 \u00b1 0.058</td></tr><tr><td>DAR</td><td colspan=\"4\">0.440 \u00b1 0.051 0.455 \u00b1 0.050 0.379 \u00b1 0.054 0.472 \u00b1 0.049 0.472 \u00b1 0.049 0.484 \u00b1 0.048</td></tr><tr><td colspan=\"5\">Average 0.412 \u00b1 0.053 0.447 \u00b1 0.051 0.354 \u00b1 0.056 0.460 \u00b1 0.051 0.464 \u00b1 0.050 0.475 \u00b1 0.049</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Absolute correlations of content preservation metrics with human scores on texts with style removal. Average 0.429 \u00b1 0.052 0.448 \u00b1 0.051 0.343 \u00b1 0.056 0.448 \u00b1 0.051 0.464 \u00b1 0.050 0.483 \u00b1 0.049", |
|
"content": "<table><tr><td>Model</td><td>BLEU</td><td>METEOR</td><td>Embed Average Greedy Match Vector Extrema</td><td>WMD</td></tr><tr><td>CAAE</td><td colspan=\"4\">0.488 \u00b1 0.043 0.517 \u00b1 0.041 0.356 \u00b1 0.049 0.490 \u00b1 0.043 0.496 \u00b1 0.042 0.517 \u00b1 0.041</td></tr><tr><td>ARAE</td><td colspan=\"4\">0.356 \u00b1 0.063 0.374 \u00b1 0.062 0.302 \u00b1 0.066 0.405 \u00b1 0.061 0.422 \u00b1 0.060 0.457 \u00b1 0.057</td></tr><tr><td>DAR</td><td colspan=\"4\">0.444 \u00b1 0.050 0.454 \u00b1 0.050 0.370 \u00b1 0.054 0.450 \u00b1 0.050 0.473 \u00b1 0.049 0.475 \u00b1 0.049</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |