{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:49.162469Z" }, "title": "Rich Syntactic and Semantic Information Helps Unsupervised Text Style Transfer", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "region": "IL", "country": "USA" } }, "email": "hgong6@illinois.edu" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "", "affiliation": {}, "email": "lfsong@tencent.com" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "region": "IL", "country": "USA" } }, "email": "spbhat2@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text style transfer aims to change an input sentence to an output sentence by changing its text style while preserving the content. Previous efforts on unsupervised text style transfer only use the surface features of words and sentences. As a result, the transferred sentences may either have inaccurate or missing information compared to the inputs. We address this issue by explicitly enriching the inputs via syntactic and semantic structures, from which richer features are then extracted to better capture the original information. Experiments on two text-style-transfer tasks show that our approach improves the content preservation of a strong unsupervised baseline model thereby demonstrating improved transfer performance.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Text style transfer aims to change an input sentence to an output sentence by changing its text style while preserving the content. Previous efforts on unsupervised text style transfer only use the surface features of words and sentences. As a result, the transferred sentences may either have inaccurate or missing information compared to the inputs. We address this issue by explicitly enriching the inputs via syntactic and semantic structures, from which richer features are then extracted to better capture the original information. Experiments on two text-style-transfer tasks show that our approach improves the content preservation of a strong unsupervised baseline model thereby demonstrating improved transfer performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text style transfer aims at rephrasing an input sentence as an output sentence in a target style (e.g. sentiment change from negative to positive), while preserving the original content. The utility of text style transfer has been shown in applications such as personalized response generation (Zhou et al., 2017; Niu and Bansal, 2018) and poetry generation (Yang et al., 2018a) . 
In particular, unsupervised style transfer has been extensively explored due to a lack of parallel corpora (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018b; John et al., 2019) .", "cite_spans": [ { "start": 294, "end": 313, "text": "(Zhou et al., 2017;", "ref_id": "BIBREF27" }, { "start": 314, "end": 335, "text": "Niu and Bansal, 2018)", "ref_id": "BIBREF17" }, { "start": 358, "end": 378, "text": "(Yang et al., 2018a)", "ref_id": "BIBREF24" }, { "start": 488, "end": 505, "text": "(Hu et al., 2017;", "ref_id": "BIBREF7" }, { "start": 506, "end": 524, "text": "Shen et al., 2017;", "ref_id": "BIBREF19" }, { "start": 525, "end": 544, "text": "Yang et al., 2018b;", "ref_id": "BIBREF25" }, { "start": 545, "end": 563, "text": "John et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most previous efforts on unsupervised text style transfer have relied on separating the content from the style of input texts. This was achieved via a transfer model with multiple decoders (Fu et al., 2018) or extra auxiliary losses (John et al., 2019) to learn disentangled representation vectors for content and style, respectively. The content vector was later combined with the vector of the desired style to produce the output.", "cite_spans": [ { "start": 189, "end": 206, "text": "(Fu et al., 2018)", "ref_id": "BIBREF2" }, { "start": 233, "end": 252, "text": "(John et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While these prior studies have successfully demonstrated the capability to adapt input texts to the desired style, the proposed approaches suffer from a significant loss of semantic content. For instance, when a model takes as input "The lounge is very outdated" and generates "The food is delicious", the key information of the input ("The lounge") is missing from the output, making the transfer irrelevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate this problem, some studies sought to explicitly replace the words related to the stylistic aspect (e.g., sentiment) and retain the other, content-related words (Xu et al., 2018a; Li et al., 2018; Madaan et al., 2020) . However, their success was limited to specific situations where the style words are explicit. When the negative sentiment is expressed implicitly, as in "The only thing I was offered was a free dessert!!!", this approach cannot have the desired effect.", "cite_spans": [ { "start": 167, "end": 185, "text": "(Xu et al., 2018a;", "ref_id": "BIBREF22" }, { "start": 186, "end": 202, "text": "Li et al., 2018;", "ref_id": "BIBREF11" }, { "start": 203, "end": 222, "text": "Madaan et al., 2020", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A second direction to address the semantic loss has been the use of back-translation (Prabhumoye et al., 2018; Lample et al., 2019; He et al., 2020) and/or reinforcement learning to preserve the input content (Gong et al., 2019) . 
Generally, these techniques involve more complex model training, adding another layer of difficulty to obtaining a strong style transfer model with robust performance.", "cite_spans": [ { "start": 85, "end": 110, "text": "(Prabhumoye et al., 2018;", "ref_id": "BIBREF18" }, { "start": 111, "end": 131, "text": "Lample et al., 2019;", "ref_id": "BIBREF10" }, { "start": 132, "end": 148, "text": "He et al., 2020)", "ref_id": "BIBREF6" }, { "start": 209, "end": 228, "text": "(Gong et al., 2019;", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a new idea to preserve the semantics by highlighting the core information that should be preserved. This is performed during the input text encoding stage. As shown in Figure 1 , we propose to include two types of structures, semantic roles and dependency trees, to represent the core semantic and syntactic information. Semantic roles directly capture the key information in a sentence, such as its subject and object. In the example of Figure 1 , "my biggest complaint" is identified as the subject of "is". Meanwhile, a dependency tree captures finer-grained word-level relations. The relations "poss" and "amod" (possessive pronoun and adjectival modifier, respectively) in this example reveal the syntactic structure within the phrase "my biggest complaint". As a different way of encoding the input, we consider a sentence, along with its dependency and semantic role annotations, as a graph. We then use a Graph Neural Network (GNN) (Marcheggiani and Titov, 2017) to encode the sentence, noting the reported success of GNNs in representing syntactic and semantic structures (Marcheggiani and Titov, 2017; Beck et al., 2018; Xu et al., 2018b) . For the overall architecture, the proposed GNN layers can be stacked onto (or replace) the encoder of an existing system. Previous advances in style transfer were achieved by new designs of the decoder or the learning framework (Shen et al., 2017; Fu et al., 2018) . Our approach can be considered orthogonal to these previous designs.", "cite_spans": [ { "start": 1031, "end": 1061, "text": "(Marcheggiani and Titov, 2017)", "ref_id": "BIBREF16" }, { "start": 1172, "end": 1202, "text": "(Marcheggiani and Titov, 2017;", "ref_id": "BIBREF16" }, { "start": 1203, "end": 1221, "text": "Beck et al., 2018;", "ref_id": "BIBREF1" }, { "start": 1222, "end": 1239, "text": "Xu et al., 2018b)", "ref_id": "BIBREF23" }, { "start": 1470, "end": 1489, "text": "(Shen et al., 2017;", "ref_id": "BIBREF19" }, { "start": 1490, "end": 1506, "text": "Fu et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 261, "end": 269, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 530, "end": 538, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Preliminary experiments with available text style transfer models on benchmark datasets validate the utility of our input-encoding approach for preserving input semantic information when compared with a strong baseline (Gong et al., 2019) without such an encoding. 
We include the details of our implementation in the supplementary material.", "cite_spans": [ { "start": 219, "end": 238, "text": "(Gong et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The baseline we consider for this study is a recent model (Gong et al., 2019) based on a generator-discriminator framework. The generator transfers sentences from the source style to the target style and is in a tight feedback loop with a set of discriminators that evaluate the quality of the transferred sentences. The reported results revealed its competitive style transfer performance on available benchmark datasets. Generator. The generator is a typical seq2seq model with a sequence encoder and a sequence decoder (Bahdanau et al., 2015) . The encoder adopts a Gated Recurrent Unit (GRU) to take in an input sentence with words {x_1, . . . , x_N} and produce the encoder states {h_1, . . . , h_N} sequentially. The sequence decoder is another GRU with the attention mechanism (Luong et al., 2015) . At each time step t, the decoder updates its hidden state s_t with the target token generated at time t-1. It then predicts the current token y_t using the current decoder state and the weighted sum of the encoder states, where the weights are produced by the attention mechanism. Discriminators. Three discriminators are included, each serving to judge one aspect of the quality of the generated target sentences from among meaning preservation, transfer strength, and fluency. Meaning preservation is evaluated using the word mover's distance (Kusner et al., 2015) , which calculates the similarity score r_sem. The style discriminator predicts the likelihood r_style that a generated sentence is in the target style, as the style quality. Moreover, a pre-trained neural language model evaluates the fluency by estimating the log-probability r_lm of each generated sentence. The overall evaluation score r that served as the feedback to the generator was a weighted average of the three evaluation metrics, to account for the fact that they may not be on the same scale.", "cite_spans": [ { "start": 58, "end": 77, "text": "(Gong et al., 2019)", "ref_id": "BIBREF4" }, { "start": 522, "end": 545, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 785, "end": 805, "text": "(Luong et al., 2015)", "ref_id": "BIBREF13" }, { "start": 1357, "end": 1378, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = \\alpha r_{sem} + \\beta r_{style} + \\eta r_{lm} ,", "eq_num": "(1)" } ], "section": "Baseline", "sec_num": "2" }, { "text": "where α, β and η are weighting coefficients. Reinforcement learning (RL). RL trains the generator with the feedback received from the discriminators (scores given by the discriminators to its generated sentences) (Gong et al., 2020) . Under the RL framework, generating a target sentence was formulated as making a sequence of actions, where an action is a token produced at a decoding step. Taking the decoding step t as an example, the decoder state s_t contains the information of the input source sentence and the partial target sentence already generated by the model. An action a_t is the generated token of the target sentence at step t, and a reward Q_t is defined to reflect how good the action is. 
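To make the training signal concrete, the following minimal Python sketch (ours, not the authors' released code; the weighting values are hypothetical placeholders) computes the combined score of Eq. (1) above and the shaped step reward that Eq. (2) below defines formally:

```python
def combined_reward(r_sem, r_style, r_lm, alpha=1.0, beta=0.5, eta=0.1):
    """Eq. (1): weighted combination of the three discriminator scores.

    r_sem:   word mover's distance-based similarity score
    r_style: likelihood of the target style from the style classifier
    r_lm:    log-probability under the pre-trained language model
    alpha, beta, eta are hypothetical values; the paper weights the
    scores so that their different scales are accounted for.
    """
    return alpha * r_sem + beta * r_style + eta * r_lm


def step_reward(avg_scores, t, T, gamma=0.9):
    """Eq. (2): reward Q(s_t, a_t) as a discounted sum of score gains.

    avg_scores[tau] stands for r(s_tau, a_tau), the average Eq. (1)
    score over sampled sentences whose first tau tokens are fixed;
    the list is indexed 0..T so avg_scores[t - 1] is always valid.
    """
    return sum(
        gamma ** (tau - t) * (avg_scores[tau] - avg_scores[tau - 1])
        for tau in range(t, T + 1)
    )
```

Policy gradient then weights each candidate token's probability by this shaped reward, as Eq. (3) below makes precise.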
The reward Q(s_t, a_t) of taking action a_t in state s_t was estimated by sampling complete target sentences with their first t-1 tokens fixed. With r(s_τ, a_τ) as the average score over the sampled sentences with the first τ tokens fixed, the reward was defined as", "cite_spans": [ { "start": 209, "end": 228, "text": "(Gong et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Q(s_t, a_t) = \\sum_{\\tau=t}^{T} \\gamma^{\\tau-t} \\left( r(s_\\tau, a_\\tau) - r(s_{\\tau-1}, a_{\\tau-1}) \\right) ,", "eq_num": "(2)" } ], "section": "Baseline", "sec_num": "2" }, { "text": "where γ was a discounting factor set to 0.9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "The generator was parameterized by θ and denoted as G_θ. The total reward J(G_θ) of generating a target sentence of T tokens was", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "J(G_\\theta) = \\sum_{t=1}^{T} \\sum_{a_t \\in V} p_\\theta(a_t | s_t) Q(s_t, a_t), (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "where p_θ(a_t|s_t) is the probability of producing the token a_t in state s_t, and V is the vocabulary. Policy gradient was applied to update the generator parameters θ with J(G_θ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "2" }, { "text": "We propose to enrich input sentences with syntactic and semantic structures, and to encode the resulting graphs using a Graph Neural Network (GNN). For a fair comparison with the baseline model, we replaced the GRU encoder of the baseline generator with our GNN encoder and kept the other modules unchanged. Our proposed model is called the Graph Transfer (GT) model. Figure 2 illustrates our style transfer model with the proposed graph encoder. The input sentence is first parsed into a syntactic-semantic graph. The graph encoder encodes the rich information of the graph into dense representations, which are then fed to a sequence decoder for sentence generation. The transferred sentences are sent to the discriminators for evaluation, and the scores of these sentences serve as the training signals for the generator.", "cite_spans": [], "ref_spans": [ { "start": 368, "end": 376, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "To jointly leverage information from both syntactic and semantic structures, which are automatically produced by off-the-shelf toolkits, we include both in a syntactic-semantic graph. In particular, the graph nodes are the words in the sentence and the relation tags (e.g. "ARG1") between word pairs. Directed edges are assigned to node pairs, and the direction is determined by the parsers. Using the parse in Figure 1 as an example, we see that the relation tag "ARG1" connects the phrase "my biggest complaint" and the verb "is". 
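To make this construction concrete before detailing the edge directions, here is a minimal sketch; this is our illustration, with the helper names and input formats assumed (the parser outputs are taken as precomputed):

```python
def build_graph(tokens, srl_triples, dep_triples):
    """Assemble a syntactic-semantic graph as (nodes, edges).

    tokens:      the sentence words; each word becomes a node.
    srl_triples: (predicate_index, role_tag, argument_indices) from the
                 semantic role labeler, e.g. (index_of("is"), "ARG1",
                 [indices of "my", "biggest", "complaint"]).
    dep_triples: (head_index, relation_tag, dependent_index) from the
                 dependency parser, e.g. (head, "amod", dependent).
    Relation tags become nodes of their own, with directed edges
    head -> tag -> dependent words.
    """
    nodes = list(tokens)
    edges = []

    def add_relation(head, tag, dependents):
        tag_node = len(nodes)      # each tag occurrence is a new node
        nodes.append(tag)
        edges.append((head, tag_node))
        for dep in dependents:
            edges.append((tag_node, dep))

    for predicate, tag, args in srl_triples:
        add_relation(predicate, tag, args)
    for head, tag, dep in dep_triples:
        add_relation(head, tag, [dep])
    return nodes, edges
```

The edge directions in the sketch anticipate the description that follows: from the head word into the relation-tag node, and from the tag node out to the argument or dependent words.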
In the syntactic-semantic graph shown in Figure 2 , we add an edge from "is" to "ARG1", and also add edges from "ARG1" to "my", "biggest" and "complaint", respectively.", "cite_spans": [], "ref_spans": [ { "start": 413, "end": 421, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 571, "end": 579, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Syntactic-Semantic Graph", "sec_num": "3.1" }, { "text": "We use a GNN to encode our syntactic-semantic graphs. It adopts an iterative message-passing mechanism, in which directly connected graph nodes pass information to each other to update their states. As a result, the graph nodes absorb rich contextual information. Taking node i in iteration k as an example, the node first collects information along its incoming edges and obtains its forward neighbor representation m^{fwd}_{k,i}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m^{fwd}_{k,i} = \\frac{1}{|N^{fwd}_i|} \\sum_{j \\in N^{fwd}_i} W^{fwd}_k h_{k-1,j} + b^{fwd}_k ,", "eq_num": "(4)" } ], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "where N^{fwd}_i is the set of incoming neighbors of node i, and W^{fwd}_k and b^{fwd}_k are model parameters. Next, the forward hidden state of node i is generated using its neighbor representations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^{fwd}_{k,i} = \\mathrm{ReLU}[\\tilde{W}^{fwd}_k h^{fwd}_{k-1,i} + \\tilde{b}^{fwd}_k , m^{fwd}_{k,i}] ,", "eq_num": "(5)" } ], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "where matrix \\tilde{W}^{fwd}_k and bias \\tilde{b}^{fwd}_k are model parameters. Similarly, the backward hidden state h^{bwd}_{k,i} is calculated from all outgoing neighbors, and the overall hidden state h_{k,i} = [h^{fwd}_{k,i}, h^{bwd}_{k,i}] is their concatenation. After a total number of K iterations, each node has collected the information of all its neighbors within a distance of K, and these states are used as the final encoder states (i.e. h_i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "In the encoding stage, a sequence encoder such as a GRU network only allows information to be propagated sequentially within a sentence. Because of this, the encoding process could result in information loss when long-range dependencies are present. 
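As a concrete reading of Eqs. (4)-(5), the following PyTorch sketch (ours) implements one message-passing iteration for the forward direction; the backward direction mirrors it with separate parameters, and the per-direction output size is our assumption, chosen to keep the concatenated state dimension stable across iterations:

```python
import torch
import torch.nn as nn

class GraphEncoderLayer(nn.Module):
    """One message-passing iteration over the syntactic-semantic graph.

    A sketch of Eqs. (4)-(5), forward direction only.  Making each
    linear map output dim // 2 features (so that the concatenation in
    Eq. (5) returns a dim-sized state) is our choice, not a detail
    stated in the paper.
    """

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim // 2)   # W_k^fwd, b_k^fwd   (Eq. 4)
        self.upd = nn.Linear(dim, dim // 2)   # ~W_k^fwd, ~b_k^fwd (Eq. 5)

    def forward(self, h, in_neighbors):
        # h: (num_nodes, dim) hidden states from iteration k-1.
        # in_neighbors[i]: indices j of nodes with an edge j -> i.
        states = []
        for i, nbrs in enumerate(in_neighbors):
            if nbrs:
                # Eq. (4): average of the transformed incoming states.
                m = self.msg(h[list(nbrs)]).mean(dim=0)
            else:
                # Node without incoming edges: zero message (our convention).
                m = h.new_zeros(self.msg.out_features)
            # Eq. (5): ReLU over the concatenation [~W h + ~b, m].
            states.append(torch.relu(torch.cat([self.upd(h[i]), m])))
        return torch.stack(states)
```

Stacking K such layers lets every node gather information from neighbors up to distance K, yielding the final encoder states h_i consumed by the sequence decoder.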
Conversely, a GNN encoder allows direct interaction between distant words that are semantically or syntactically related, thereby better serving the style transfer process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Encoding with GNN", "sec_num": "3.2" }, { "text": "Dataset. We focus on the task of sentiment transfer, retaining the setting of the Yelp dataset as in (Shen et al., 2017) . The dataset contains 176,878 negative and 267,314 positive sentences for training, 25,278 negative and 38,205 positive sentences for development, and 50,278 negative and 76,392 positive sentences for testing. We construct syntactic-semantic graphs with the Stanford dependency parser (Manning et al., 2014) and the semantic role labeler of AllenNLP (Gardner et al., 2018) . Baselines. We compared our model (GT) with three state-of-the-art models for text style transfer:", "cite_spans": [ { "start": 98, "end": 117, "text": "(Shen et al., 2017)", "ref_id": "BIBREF19" }, { "start": 409, "end": 431, "text": "(Manning et al., 2014)", "ref_id": "BIBREF15" }, { "start": 478, "end": 500, "text": "(Gardner et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(1) Reinforcement learning based model (RL). RL is the baseline summarized in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(2) Cross alignment model (CA). CA transfers the text style by combining content representation with style information (Shen et al., 2017) .", "cite_spans": [ { "start": 119, "end": 138, "text": "(Shen et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(3) Multi-decoder model (MD). MD disentangles content from style, and adopts multiple decoders to produce outputs of various styles (Fu et al., 2018) . Implementation. We include the implementation details of our model in Appendix A.1.", "cite_spans": [ { "start": 132, "end": 149, "text": "(Fu et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Evaluation metrics. We use the same automatic evaluation metrics from prior studies to evaluate the outputs in terms of semantic preservation, style transfer strength and fluency. Semantic preservation s_sem is measured by a sentence-similarity metric based on pre-trained GloVe embeddings (Fu et al., 2018) . Transfer strength s_style is measured by the percentage of generated sentences that can be correctly classified into the target style by a pre-trained style classifier (Fu et al., 2018) . Considering the trade-off between semantic preservation and transfer strength, Fu et al. (2018) proposed an overall score s_overall = (s_sem * s_style) / (s_sem + s_style), with higher scores indicating better generation quality. Similar to Gong et al. (2019) , we use the perplexity estimated by a pre-trained RNN-based language model to quantify fluency, and lower perplexity indicates higher fluency. Results. Table 1 shows the metric scores of each model for the transfer in two directions. Looking at the overall score, our model (GT) outperforms all the models for positive-to-negative transfer. It achieves a comparable performance with the RL baseline for negative-to-positive transfer, while still outperforming the other models. In particular, GT shows a largely improved semantic score compared to RL on both tasks. 
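(As a side note on the metrics just defined, the overall score combines the two aspects as in the following one-line sketch of ours; the zero-denominator guard is our addition:)

```python
def overall_score(s_sem, s_style):
    """Overall quality (Fu et al., 2018): the product of the semantic
    and style scores over their sum, i.e. half their harmonic mean, so
    a model cannot excel by maximizing only one of the two aspects."""
    return (s_sem * s_style) / (s_sem + s_style) if (s_sem + s_style) else 0.0
```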
In terms of perplexity, GT is comparable to RL. Table 3: Ablation study of style transfer with graph encoder. Table 2 gives examples of transferred sentences. The row Orig shows the original sentences. For both tasks, we notice that MD generates irrelevant words, such as "pizza" and "staff", that were not mentioned in the input. This behavior is reflected in its low semantic score. CA fails to transfer the sentiment for both cases, which is consistent with its low style score. RL is successful in changing the sentiment for both cases, but it largely misses input content, such as "this place". We note that the addition of syntactic and semantic information helps to preserve most of the original content, reflected in GT outperforming MD in content preservation by a large margin. Both our model (GT) and CA have high semantic scores, but our model does better than CA in changing the style. More transferred examples are shown in Appendix A.2. Ablation analysis. We incorporate syntactic and semantic information into style transfer via both dependency parsing and semantic role labeling. To explore the benefits of each part, we performed an ablation study on the task of positive-to-negative transfer. Table 3 compares the performance of models with only dependency parsing (Dep) and with only semantic role labeling (SRL). The graph encoder with only syntactic information from the dependency parser does well in changing the text style while falling behind in content preservation. It achieves lower perplexity (i.e., higher fluency) than the encoder with both dependency parsing and semantic role labeling. A possible explanation is that the syntactic information provided by the dependency parser plays an important role in style and grammaticality.", "cite_spans": [ { "start": 290, "end": 307, "text": "(Fu et al., 2018)", "ref_id": "BIBREF2" }, { "start": 478, "end": 495, "text": "(Fu et al., 2018)", "ref_id": "BIBREF2" }, { "start": 577, "end": 593, "text": "Fu et al. (2018)", "ref_id": "BIBREF2" }, { "start": 728, "end": 746, "text": "Gong et al. (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 900, "end": 907, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1362, "end": 1369, "text": "Table 3", "ref_id": null }, { "start": 1425, "end": 1432, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 2522, "end": 2529, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4.1" }, { "text": "The encoder with only semantic role labeling trades off its style transfer strength for content preservation. This is consistent with our linguistic intuition that semantic roles centrally capture sentence meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4.1" }, { "text": "Human evaluation of the output complements the evaluation using automatic metrics for transfer quality. Accordingly, we sampled 50 positive and 50 negative sentences from the Yelp corpus, along with the corresponding outputs from all models. Two raters with native-like English proficiency selected the best sentence(s) among all candidates separately for the dimensions of content preservation, transfer strength and fluency. The best sentence(s) received a score of 1, and the others received a 0. Ties were allowed, i.e., multiple transferred outputs could receive a 1 for the same input. Each output was scored along each dimension by averaging its scores from the two raters. 
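(A tiny sketch, with variable names of our choosing, of how these per-rater judgments are averaged:)

```python
def human_scores(rater1_picks, rater2_picks):
    """Average two raters' binary picks for one model on one dimension.

    Each list holds 1 if the rater judged the model's output best for
    that input (ties allowed) and 0 otherwise, so every example ends
    up with a score in {0, 0.5, 1}.
    """
    return [(a + b) / 2 for a, b in zip(rater1_picks, rater2_picks)]
```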
Table 4 reports the percentage of times each model won, and the percentage of times multiple models tied, for positive-to-negative and negative-to-positive transfer respectively. We note that GT outperforms all baselines in all evaluation aspects. Further discussion of the human evaluation results is available in Appendix A.3.", "cite_spans": [], "ref_spans": [ { "start": 671, "end": 678, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "4.2" }, { "text": "We empirically demonstrated how including rich syntactic and semantic information can help to preserve content during text style transfer. Toward this, we compared competitive style transfer models with and without enriching inputs via syntactic and semantic structures on a benchmark dataset. We found that, instead of encoding the input text as a sequence of words alone, encoding the input sentence's syntactic-semantic graph via a graph neural network serves to explicitly highlight the sentence's core meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this work, we have focused on the generator to improve the performance of text style transfer. One of our future directions is to incorporate better semantic metrics into the discriminators so that the training loss could measure the preservation of semantic information more accurately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work was supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network. We would like to thank the anonymous reviewers for their constructive comments and suggestions. We also thank Raghavendra Bhat for data annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "273--283", "other_ids": { "DOI": [ "10.18653/v1/P18-1026" ] }, "num": null, "urls": [], "raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273-283, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Style transfer in text: Exploration and evaluation", "authors": [ { "first": "Zhenxin", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Xiaoye", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reinforcement learning based text style transfer without parallel training corpus", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jinjun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Wen-Mei", "middle": [], "last": "Hwu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3168--3180", "other_ids": { "DOI": [ "10.18653/v1/n19-1320" ] }, "num": null, "urls": [], "raw_text": "Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training corpus. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3168-3180.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Recurrent chunking mechanisms for long-text machine reading comprehension", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Yelong", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020", "volume": "", "issue": "", "pages": "6751--6761", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Gong, Yelong Shen, Dian Yu, Jianshu Chen, and Dong Yu. 2020. Recurrent chunking mechanisms for long-text machine reading comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6751-6761. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A probabilistic formulation of unsupervised text style transfer", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Toward controlled generation of text", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1587--1596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. 
In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Disentangled representation learning for non-parallel text style transfer", "authors": [ { "first": "Vineet", "middle": [], "last": "John", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Hareesh", "middle": [], "last": "Bahuleyan", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "424--434", "other_ids": { "DOI": [ "10.18653/v1/p19-1041" ] }, "num": null, "urls": [], "raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 424-434. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In International conference on machine learning, pages 957-966.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multiple-attribute text rewriting", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. 
In International Conference on Learning Representations.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "authors": [ { "first": "Juncen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1865--1874", "other_ids": { "DOI": [ "10.18653/v1/n18-1169" ] }, "num": null, "urls": [], "raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A dual reinforcement learning framework for unsupervised text style transfer", "authors": [ { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", "volume": "", "issue": "", "pages": "5116--5122", "other_ids": { "DOI": [ "10.24963/ijcai.2019/711" ] }, "num": null, "urls": [], "raw_text": "Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Xu Sun, and Zhifang Sui. 2019. A dual reinforcement learning framework for unsupervised text style transfer. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5116-5122.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/d15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Politeness transfer: A tag and generate approach", "authors": [ { "first": "Aman", "middle": [], "last": "Madaan", "suffix": "" }, { "first": "Amrith", "middle": [], "last": "Setlur", "suffix": "" }, { "first": "Tanmay", "middle": [], "last": "Parekh", "suffix": "" }, { "first": "Barnabás", "middle": [], "last": "Póczos", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "2020", "issue": "", "pages": "1869--1881", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabás Póczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1869-1881. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1506--1515", "other_ids": { "DOI": [ "10.18653/v1/d17-1159" ] }, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1506-1515. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Polite dialogue generation without parallel data", "authors": [ { "first": "Tong", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "373--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association for Computational Linguistics, 6:373-389.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Style transfer through back-translation", "authors": [ { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "866--876", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "6830--6841", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830-6841.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A graph-to-sequence model for AMR-to-text generation", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1616--1626", "other_ids": { "DOI": [ "10.18653/v1/P18-1150" ] }, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMR-to-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616-1626, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A hierarchical reinforced sequence operation method for unsupervised text style transfer", "authors": [ { "first": "Chen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "4873--4883", "other_ids": { "DOI": [ "10.18653/v1/p19-1482" ] }, "num": null, "urls": [], "raw_text": "Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun. 2019. A hierarchical reinforced sequence operation method for unsupervised text style transfer. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4873-4883. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "979--988", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018a. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 979-988.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Exploiting rich syntactic information for semantic parsing with graph-to-sequence model", "authors": [ { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vadim", "middle": [], "last": "Sheinin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "918--924", "other_ids": { "DOI": [ "10.18653/v1/d18-1110" ] }, "num": null, "urls": [], "raw_text": "Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018b. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 918-924.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Stylistic Chinese poetry generation via unsupervised style disentanglement", "authors": [ { "first": "Cheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xiaoyuan", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Wenhao", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3960--3969", "other_ids": { "DOI": [ "10.18653/v1/d18-1430" ] }, "num": null, "urls": [], "raw_text": "Cheng Yang, Maosong Sun, Xiaoyuan Yi, and Wenhao Li. 2018a. Stylistic Chinese poetry generation via unsupervised style disentanglement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3960-3969.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Unsupervised text style transfer using language models as discriminators", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "7287--7298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018b. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287-7298.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sentence-state LSTM for text representation", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "317--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentence-state LSTM for text representation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 317-327.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Mechanism-aware neural machine for dialogue response generation", "authors": [ { "first": "Ganbin", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Rongyu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Fen", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qing", "middle": [], "last": "He", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. 
In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "A parsed sentence with dependency arcs (top) and edges that show semantic roles (bottom).", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Text style transfer model with syntactic-semantic graph.", "type_str": "figure" }, "TABREF1": { "content": "", "num": null, "type_str": "table", "html": null, "text": "Model performance of style transfer on Yelp dataset (GT is RL with the proposed enhancement)." }, "TABREF2": { "content": "
Model | Negative-to-Positive | Positive-to-Negative
Orig | unfortunately , our experience did not live up to others' experiences . | but still love this place every time regardless !
CA | unfortunately , our food has nothing to be some great restaurants . | but avoid this place is just every time time time !
MD | pizza is always good , and the staff is great . | staff is bad , we have all the food back .
RL | unfortunately , their experience is always to be coming back | but i would not get this every time time !
GT | however , our experience did such an incredible job . | this place has gone down hill .
", "num": null, "type_str": "table", "html": null, "text": "Examples of transferred sentences by different models." } } } }