[ "Hurdles to Progress in Long-form Question Answering", "Hurdles to Progress in Long-form Question Answering" ]
[ "Kalpesh Krishna kalpesh@cs.umass.edu \nMohit Iyyer โ™ \nUniversity of Massachusetts Amherst, โ™ฆ Google Research\n\n", "Aurko Roy aurkor@google.com \nMohit Iyyer โ™ \nUniversity of Massachusetts Amherst, โ™ฆ Google Research\n\n" ]
[ "Mohit Iyyer โ™ \nUniversity of Massachusetts Amherst, โ™ฆ Google Research\n", "Mohit Iyyer โ™ \nUniversity of Massachusetts Amherst, โ™ฆ Google Research\n" ]
[ "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies" ]
The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer. While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress. To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset. While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system's generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / validation overlap, as at least 81% of ELI5 validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA. We offer suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future. 1
DOI: 10.18653/v1/2021.naacl-main.393
PDF: https://www.aclweb.org/anthology/2021.naacl-main.393.pdf
Corpus ID: 232185275
arXiv: 2103.06332
1 Introduction

Long-form question answering (LFQA) integrates the retrieval component of open-domain QA, which involves searching a large external knowledge source for documents relevant to a given question, with a text generation component to produce paragraph-length answers. Significant progress has been made on open-domain QA datasets such as Natural Questions (Kwiatkowski et al., 2019), whose questions are answerable with short phrases and entities, by leveraging dense retrieval techniques like ORQA (Lee et al., 2019), REALM (Guu et al., 2020), and DPR (Karpukhin et al., 2020; Lewis et al., 2020c; Izacard and Grave, 2020). Methods inspired by these results have recently been combined with pretrained language models (Lewis et al., 2020b; Petroni et al., 2020) and applied to the Reddit-derived "Explain Like I'm Five" (ELI5) dataset (Fan et al., 2019), which is the only publicly available large-scale LFQA dataset.

The recently proposed KILT benchmark (Petroni et al., 2020), which compares retrieval-augmented models across a variety of knowledge-intensive tasks including ELI5, automatically evaluates LFQA models by the quality of both generated answers (ROUGE-L against reference answers) and retrieved documents (R-precision against human-annotated relevant documents). In this paper, we build a state-of-the-art system for ELI5 by using a sparse Transformer variant (Roy et al., 2020) to condition over Wikipedia paragraphs returned by a REALM-style retriever (Guu et al., 2020). However, despite its success on the KILT leaderboard, our system does not actually use the documents that it retrieves!
To measure the effect of retrieval on generation quality, we design a control experiment in which retrieved documents are replaced with randomly-sampled documents at inference time. Results from both human A/B tests and automatic metrics like ROUGE-L demonstrate that conditioning on random documents has almost no effect on generated answer quality (Figure 1c). We recommend that future LFQA research report the results of such control experiments in addition to reporting generation and retrieval quality.

How can a system using random retrieval perform well on ELI5? Our analysis reveals that this result is partially due to significant train / validation overlap in the ELI5 dataset (Figure 1a), which eliminates the need for external retrieval. A human study shows that at least 81% of validation questions have a paraphrase in the training set, and almost all validation questions are topically similar to a training set question. While Fan et al. (2019) attempted to identify and remove question overlap using TF-IDF similarity, more complex semantic matching methods and human verification are needed to address this issue in future LFQA datasets.

Digging deeper, we identify fundamental issues with using ROUGE-L to evaluate generated answer quality (Figure 1b). Simple baselines, such as repeatedly copying the question or choosing a random training set answer, can outperform LFQA systems such as RAG (Lewis et al., 2020c) in terms of ROUGE-L. On the other hand, our system achieves higher ROUGE-L than reference human-written answers, which is misleading since human A/B testers strongly prefer reference answers to our system's. We conclude that ROUGE-L is not a reliable metric to evaluate LFQA due to its large and relatively unconstrained output space (e.g., compared to translation or summarization), and we offer suggestions for better automatic and human evaluations to enable meaningful progress on this task.

2 A state-of-the-art LFQA system

The ELI5 task (Fan et al., 2019) asks models to generate paragraph-length answers to open-ended questions in English that often rely on world knowledge (e.g., "how do jellyfish function without brains or nervous systems?"). LFQA systems thus benefit from conditioning answer generation on relevant documents from the web (such as the Wikipedia article about jellyfish). While large-scale pretrained language models store surprising amounts of world knowledge within their parameters (Petroni et al., 2019; Roberts et al., 2020), external document retrieval not only augments this intrinsic knowledge but also grounds model outputs in a knowledge source, which provides interpretability.

In this section, we describe our proposed LFQA system, which conditions answer generation on Wikipedia articles identified by a pretrained retriever. We use a dense retriever trained by scaling up a distantly supervised algorithm from Jernite (2020). Since retrieved articles can be quite long and often exceed the maximum sequence length of pretrained models like BERT (Devlin et al., 2019), we use a sparse-attention variant of the Transformer to allow modeling over longer sequences. While our system sets a new state-of-the-art on ELI5, we question the significance of this result in Section 3.

2.1 Retriever

We begin by specifying our dense retriever ("contrastive REALM" or C-REALM), which returns documents related to an input question. Consider a corpus of long-form questions and answers, represented by $(q_i, a_i)_{i=1}^{N}$.
Our retriever uses $q_i$ as a query to retrieve $K$ documents $(r_{i,j})_{j=1}^{K}$ from a knowledge corpus (Wikipedia), which is enabled by an encoder network that projects both questions and candidate documents to a 128-d shared embedding space. Like REALM (Guu et al., 2020), our encoder is a BERT-base Transformer (Devlin et al., 2019) with a final projection layer. Since the ELI5 dataset does not include gold retrievals, we train our retriever by scaling up a method recently introduced by Jernite (2020) that uses gold answers for distant supervision. The key idea is to push the encoded vector for a question close to a vector representation of its ground-truth answer(s), but away from all other answer vectors in the mini-batch (negative examples). Intuitively, this method works because both ELI5 answers and external documents are of paragraph length (documents are paragraph-length chunks from Wikipedia). Concretely, we optimize the loss

$$\mathcal{L} = - \sum_{(q_i, a_i) \in B} \log \frac{\exp(\mathbf{q}_i \cdot \mathbf{a}_i)}{\sum_{a_j \in B} \exp(\mathbf{q}_i \cdot \mathbf{a}_j)}$$

where $B$ is the mini-batch and $\mathbf{q}_i$, $\mathbf{a}_i$ are the encoded vector representations of $(q_i, a_i)$. This objective is based on contrastive learning, a method that has been used effectively for semi-supervised learning (Chen et al., 2020) and dense retriever training (Karpukhin et al., 2020). Scaling up from Jernite (2020), who used a mini-batch size of 512 and initialized their retriever with BERT, we use much larger mini-batches of size 12,288 (and hence many more negative examples) and initialize our retriever with a strong pretrained retriever, the REALM model (Guu et al., 2020) trained on the Common Crawl News (CC-News) corpus. These design decisions greatly improve retriever quality, as we observe in an ablation study (see Appendix A.2). During inference, we perform maximum inner-product search (MIPS) with the ScaNN library (Guo et al., 2020) to efficiently find the top K documents. In all our experiments we use K = 7, following the setup in Guu et al. (2020).

2.2 Generator

We next describe our generator model, which conditions its generated answers on retrieved documents returned by C-REALM. We use the Routing Transformer (RT) from Roy et al. (2020), which is the current state-of-the-art in long-form language modeling. The RT is a sparse attention model that employs local attention as well as mini-batch k-means clustering to better model long-range dependencies in sequences (attention maps in Appendix A.1). Long-form language models such as the RT are well suited to ELI5, as the task requires conditioning answer generation not only on a short question but also on many lengthy retrieved documents.

We pretrain our RT model on PG-19, a long-form language modeling benchmark (Rae et al., 2020) created from approximately 28,000 Project Gutenberg books published before 1919. PG-19 has 1.9B tokens and an average context size of 69K words. While this data is out-of-domain for ELI5, we choose it to encourage long and coherent generation. Our RT is a 22-layer model with 1032 hidden units (486M parameters), a maximum sequence length of 8192 tokens, and a vocabulary of 98K subwords. 3 We fine-tune our model in a decoder-only fashion (Liu et al., 2018; Wolf et al., 2018) by concatenating the top K retrieved documents to the question, $[r_{i,K}, r_{i,K-1}, \ldots, r_{i,1}, q_i, a_i]$, and training the model to predict tokens of the answer $a_i$. We do not backpropagate gradients through the retriever. 4 Retrievals slightly improve perplexity (18.1 vs 17.8), as seen in Wang and McAllester (2020), but do not improve generations (§3.1).
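The in-batch contrastive objective above is simple to sketch in code. The following is a minimal NumPy illustration of the idea (not the authors' TensorFlow/TPU implementation; the function and variable names are ours), where every other answer in the mini-batch acts as a negative example, so larger batches mean more negatives.

```python
import numpy as np

def in_batch_contrastive_loss(q_emb: np.ndarray, a_emb: np.ndarray) -> float:
    """Distantly supervised retriever loss (sketch).

    q_emb, a_emb: [batch, dim] encoded questions and their gold answers.
    The positive for question i is answer i; all other answers in the
    mini-batch are negatives. Averaged over the batch here (the paper's
    equation sums over the batch, which only rescales the loss).
    """
    scores = q_emb @ a_emb.T                      # [batch, batch] inner products
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # -log softmax of the diagonal

# Toy usage: 4 question/answer pairs with 128-d embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128))
a = q + 0.1 * rng.normal(size=(4, 128))           # answers roughly aligned with questions
print(in_batch_contrastive_loss(q, a))
```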
are hidden, and hosted on a public leaderboard in the EvalAI platform (Yadav et al., 2019). Answer quality is measured by the maximum overlap of generations with a set of gold answers in terms of unigram F1 score and ROUGE-L (Lin, 2004). Petroni et al. (2020) collected human annotations of Wikipedia articles which support ELI5 gold answers, which enables measuring retrieval quality by computing R-precision (if the top-1 retrieval matches the annotation) and Recall@5 using the top-5 retrievals. Finally, the KILT benchmark combines R-prec. and ROUGE-L to measure the overall performance of the system by "KILT ROUGE-L". This metric is similar to ROUGE-L, but assigns a score of 0 whenever the top-1 retrieval does not match the gold annotation. Results: Table 1 contains our results on the test set of the ELI5 (also on the public KILT leaderboard). We present four variants of our system, using a different retriever during inference (REALM or C-REALM), and different nucleus sampling p values (Holtzman et al., 2020). All variants outper-Q: Why are almost all boats white? Main Experiments A: Boats are generally painted white for obvious reasons, as they are inexpensive and easy to clean. Ships are also often painted to provide a visible visual contrast to the sea. Boats also often have high visibility ports to allow for easy international trade. In addition, many boats have an airbreathing function that makes them easier to spot and stay afloat, which is a good thing if you're a pirate or trying to get into some country's waters. Lastly, many ships are built in the last few decades, so they are not as exotic looking as they were thirty years ago. And lastly, many ships are made out of materials that are easy to work with, and so many of them are painted white as a practical matter. There are many, many more reasons for why ships are white, but those are the ones I know of. I'm sure there are many other reasons, but those are the ones I know of. Edit: I forgot to mention that the other answers about visibility and easy to see ports are also true. Ships are painted to appear to be larger, and more expensive. That makes them appear more impressive, and so they are more likely to be bought by people who want to show off their wealth and status. form prior work in generation quality, with lowerentropy models (p = 0.6) performing best. 6 C-REALM performs competitively to RAG and DPR despite being only distantly supervised, and outperforms REALM. Our proposed RT+C-REALM system achieves a new state-of-the-art on combined performance (KILT R-L). Generations from our model are provided in Figure 2 and Appendix A.4. Analysis In this section, we conduct a thorough analysis of our model's usage of retrievals (Section 3.1), the impact of overlap in ELI5's train / validation / test folds (Section 3.2), issues with ROUGE-L and performance bounds (Section 3.3), and the difficulty in human evaluation for this task (Section 3.4). At the end of each section, we provide short takeaways with suggestions for future work. Are generations grounded in retrieval? While our retrieval-augmented system achieves state-of-the-art performance, we find little evidence that it is actually using the retrieved documents. To measure this, we run an ablation study where at inference time we replace retrieved paragraphs with vs predicted retr. vs random retr. R-L 1-g 2-g 1-g 2-g Table 2: Comparison of generations (with p = 0.6) conditioned on predicted retrievals (Predicted) and randomly chosen retrievals (Random). 
Notice small differences in: (1) ROUGE-L vs gold answers (R-L); (2) n-gram overlap (n-g) with predicted retrievals (vs predicted retr. (ยท). Annotators are shown a question along with two answers (A, B) in random order and ask them to choose one (details in Appendix A.5). For both model variants (p = 0.6, 0.9), we see (1) little difference between generations conditioned on predicted (pred.) or random (rand.) retrievals; (2) strong preference for gold answers over generations. randomly sampled paragraphs from Wikipedia. We compare this Random baseline with our original system (Predicted) in terms of generation quality as well as the n-gram overlap between the generation and the retrieved paragraphs. Generations are similar irrespective of type of retrievals: We present our results in Table 2. Despite not being conditioned on any meaningful retrievals, the Random retrieval model has similar ROUGE-L scores as our Predicted system. Moreover, generations from the Random and Predicted models have similar amounts of 1-gram and 2gram overlap with the paragraphs retrieved by C-REALM, despite the fact that the Random model does not actually see the retrieved paragraphs. 7 The n-gram overlaps are possibly overestimates due to stopwords (e.g., prepositions, punctuation) and entities which are copied from the question. vs qn. vs predicted retr. vs random retr. but not in qn. but not in qn. (lemmatized nouns, proper nouns, numbers only) Table 4: A fine-grained version of Table 2 measuring the unigram overlap of nouns/numbers in the generations with the input question (vs qn.), retrievals predicted by C-REALM (vs predicted retr.) and randomly sampled retrievals (vs random retr.). Similar to Table 2, notice very little difference with and without retrieval. To tackle this issue, in Table 4 we measure the fractions of lemmatized nouns, proper nouns and numbers in the generated answer which are present in the predicted retrievals but not in the question. We notice similar trends as before, with only small differences between the two systems. Finally, there is almost no correlation (Spearman ฯ = 0.09) between the Predicted model's generation quality and the amount of unigram overlap between its outputs and the retrieved documents (scatter plots in Appendix A.7), strengthening our hypothesis that generations are not grounded in retrievals. 8 Human evaluation validates our findings: As ROUGE-L and n-gram overlap have major limitations for LFQA (Section 3.3), we perform additional human A/B testing on the output of Random and Predicted. Specifically, we ask human volunteers 9 to choose between answers generated by the two systems (presented in random order). As seen in Table 3, humans struggle to choose which of the two answers is more relevant to the question. For both model variants (p = 0.6, 0.9), there is a less than 7% preference for a particular answer type, with humans preferring answers (by 6%) from the Random model for p = 0.9! Other systems also have this issue, possibly due to source-reference divergence and trainvalidation overlap: We note that this issue is not unique to our system -other systems on the KILT leaderboard like BART + DPR and RAG actually perform worse than their no-retrieval counterpart (BART) in generation quality, as shown in Table 1. Qualitatively, we found no evidence of retrieval usage in a publicly hosted ELI5 model demo by Jernite (2020). 
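To make the grounding measurements in this section concrete, here is a minimal sketch (our illustration, not the authors' code) of the two quantities used above: the fraction of generation n-grams that also appear in the retrieved paragraphs, and the Spearman correlation between that overlap and per-example ROUGE-L. It assumes plain whitespace tokenization and uses SciPy for the correlation.

```python
from scipy.stats import spearmanr

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_fraction(generation: str, retrievals: list[str], n: int = 1) -> float:
    """Fraction of the generation's n-grams that occur in any retrieved paragraph."""
    gen = ngrams(generation.lower().split(), n)
    ret = set().union(*(ngrams(r.lower().split(), n) for r in retrievals))
    return len(gen & ret) / max(len(gen), 1)

def grounding_correlation(rouge_scores, generations, retrieval_sets):
    """Spearman correlation between per-example ROUGE-L and unigram overlap
    with the retrieved documents (the paper reports rho of about 0.09)."""
    overlaps = [overlap_fraction(g, r) for g, r in zip(generations, retrieval_sets)]
    rho, _ = spearmanr(rouge_scores, overlaps)
    return rho
```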
10 A possible explanation for this issue is high source-reference divergence, a common problem in table-to-text generation (Wiseman et al., 2017;Tian et al., 2019). In Table 2 and Table 4, we measure the n-gram overlap of top-ranked gold validation answers (Gold Ans) with predicted retrievals. This overlap is low and similar to that of our generations, which we suspect encourages our model to ignore retrievals. A second explanation is the large amount of train-validation overlap (Section 3.2), which eliminates the need for retrieval. , an unconstrained dialogue generation task with single sentence dialogues (much shorter than ELI5). As seen on the public KILT leaderboard, 12 our system has lower ROUGE-L scores than the BART / RAG baselines. Another possible explanation is issues with ROUGE-L itself, as discussed in Section 3.3. Takeaway (better evaluation of grounding): For evaluating LFQA, it is important to run control experiments with random retrievals & measure grounding of generations in retrieval. While the KILT benchmark does attempt to measure the com-10 https://huggingface.co/qa 11 While we do not have access to generations from baselines on the KILT leaderboard, example generations from the demo of the BART model in Jernite (2020) are significantly shorter (59 words avg.) than our generations (187 words avg.). 12 https://eval.ai/web/challenges/ challenge-page/689/leaderboard/1909 bined retrieval + generation performance via KILT RL, it does not check whether the generations actually used the retrievals. In other words, one can submit independent retrieval & generation systems, but still perform well on the combined score. This may not be an issue for short-form QA tasks like Natural Questions, since the gold answer is often exactly contained as a span in the gold retrieval. Also, as retrieval might be less important for large language models with parametric knowledge (Roberts et al., 2020), the KILT-RL strategy of simply aggregating top-1 retrieval score with ROUGE-L unfairly penalizes systems not relying on retrieval. 13 Training / Validation Overlap Our experiments in Section 3.1 show that model performance is mostly unchanged by conditioning generation on randomly sampled retrievals instead of predictions from C-REALM. Despite not using retrievals, we observe qualitatively that our model displays a large amount of parametric knowledge ("Faraday Cage" in Figure 1c), which is surprising since it was pretrained on novels from Project Gutenberg (not Wikipedia). In this section, we discover that a major reason for ignoring retrievals is the large amount of train / validation overlap in ELI5. While Fan et al. (2019) attempted to fix this issue through TF-IDF overlap, this method is insufficient to identify all question paraphrases, as we find significant overlap between the training set and the KILT validation set of ELI5. 14 ELI5 is not the only dataset with substantial train / test overlap: Lewis et al. (2020d) identify similar issues with short-form QA datasets like Natural Questions. Finding similar questions & measuring overlap: We use our retriever C-REALM to retrieve similar questions from the training set, since it has learned to map questions to a feature-rich embedding space. For each validation question, we retrieve the 7 most similar training set questions. We use both human and automatic evaluation to calculate the amount of overlap. 
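As a sketch of the overlap-detection setup just described (not the released code), similar training questions can be found by embedding all questions with the retriever's question encoder and taking the largest inner products. The paper uses ScaNN for approximate MIPS at scale; the brute-force NumPy version below is only an illustration, and the embeddings are assumed to come from a stand-in for C-REALM's encoder.

```python
import numpy as np

def top_k_similar_questions(val_emb: np.ndarray, train_emb: np.ndarray, k: int = 7):
    """For each validation question embedding, return indices of the k most
    similar training questions by inner product (brute-force MIPS).

    val_emb:   [num_val, dim] embeddings from the retriever's question encoder.
    train_emb: [num_train, dim] embeddings of all training questions.
    """
    scores = val_emb @ train_emb.T                            # [num_val, num_train]
    top_k = np.argpartition(-scores, kth=k - 1, axis=1)[:, :k]
    # Sort the k candidates by score, highest first.
    rows = np.arange(scores.shape[0])[:, None]
    order = np.argsort(-scores[rows, top_k], axis=1)
    return np.take_along_axis(top_k, order, axis=1)
```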
For human evaluation, we show annotators on Amazon Mechanical Turk 15 a validation set question and a retrieved training set question, qns with at least one train set paraphrase 81% qns with at least one train set topically similar 100% % of all pairs marked paraphrases 39.5% % of all pairs marked topically similar 47.8% % of all pairs marked as non-paraphrases 12.7% Table 5: A human evaluation measuring the amount of overlap between validation set questions (qns) and retrieved questions from the training set. and ask them to annotate the pair as 0: No paraphrase relationship; 1: on similar topics, but different questions; 2: approximately the same question (an adaptation of the paraphrase evaluation of Kok and Brockett, 2010). We take 300 validation set questions and ask three crowd-workers to rate them against retrieved training questions on this scale, and consider the label with majority rating. To improve quality, we manually verify their annotations. Table 5 shows that 81% of validation set questions have at least one paraphrase in the training set, while all annotated questions have at least one topically similar question in the training set, which indicates substantial training / validation overlap. The experiment had "fair agreement" with a Fleiss ฮบ of 0.29 (Fleiss, 1971;Landis and Koch, 1977). As manually annotating question overlap can be expensive and time-consuming, we also experiment with automatic overlap detection methods. In particular, we use a RoBERTa-large binary classifier (Liu et al., 2019) fine-tuned on the Quora Question Paraphrase (QQP) dataset (Iyer et al., 2017) from the GLUE benchmark (Wang et al., 2019). For 43.6% of the ELI5 validation set, this classifier marked at least one retrieved question as a paraphrase (46% for the 300 questions we annotated). Qualitatively, we notice that this classifier often mis-classifies retrieved questions that are valid paraphrases but exhibit significant lexical or syntactic divergence. This observation, along with the smaller fraction of valid paraphrases in the QQP training set (37%), partially explains the gap between automatic & human evaluations. Using retrieved QA for generation: Since ELI5 contains significant amount of overlap between the training and validation sets, a system can simply copy the answers of retrieved training set questions instead of actually doing generation. Table 7 shows that by using the longest answer within the top-K retrieved questions, we outperform two prior systems (RAG, BART + DPR) that use retrieval-augmented generation. As an upper Table 6: ELI5 performance difference (for the p = 0.6 model) between subsets of validation QA having a question paraphrase (overlap) and not having a question paraphrase (not overlap) in the training set. We see the overlap subset has much better retrieval performance and slightly better generation performance. bound, we also consider a system which uses the best possible answer to retrieved training set questions in terms of ROUGE-L (best top-K train answer). This system gets 28.5 ROUGE-L, outperforming all others. ELI5 performance on overlapping QA: Finally, we measure the performance difference between validation questions that overlap with the training set vs. those that do not. Since we only have human annotations for 300 questions (the nooverlap subset has only 53 samples), we present this analysis using the QQP classifier's outputs as well. 
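For reference, the inter-annotator agreement reported above (Fleiss κ = 0.29) can be reproduced from the raw ratings with statsmodels. This is a minimal sketch under the assumption that ratings are coded 0/1/2 exactly as in the annotation scheme; the function name and toy data are ours.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def paraphrase_agreement(ratings: np.ndarray) -> float:
    """Fleiss' kappa for the 0/1/2 paraphrase annotation.

    ratings: [num_questions, num_raters] integer array, where each entry is
    0 (no paraphrase relationship), 1 (topically similar), or 2 (paraphrase).
    """
    # aggregate_raters converts per-rater labels into per-category counts.
    table, _ = aggregate_raters(ratings)
    return fleiss_kappa(table)

# Toy usage: 4 questions rated by 3 annotators each.
toy = np.array([[2, 2, 2], [1, 1, 2], [0, 1, 1], [2, 2, 1]])
print(paraphrase_agreement(toy))
```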
In Table 6, we notice large differences of 6.6 RPrec, 8.1 R@5 in retrieval performance favoring the overlap subset, but only a small generation score gain of 0.8 F1, 0.4 R-L (which may be misleading as discussed in Section 3.3). Takeaway (careful held-out curation): Based on our findings, we suggest that more careful dataset curation for LFQA tasks is needed to prevent duplicates. While we acknowledge the efforts of Fan et al. (2019) to fix this issue, we also suggest alternative methods to control overlap and focus on evaluating generalization in held-out sets: (1) automatically retrieving paraphrases and then running human validation to eliminate them; or (2) holding out entire genres or domains to reduce the possibility of overlap -for example, keeping Q/A on Sports only in the held-out sets. Note that simply pruning the existing splits using these criteria will significantly reduce the size of the held-out datasets; so we suggest re-splitting the train/validation/test splits from the entire pool of collected questions. ROUGE-L Bounds on ELI5 Performance We have seen that simply copying the answer of a close question paraphrase from the training set achieves 28.5 ROUGE-L with an optimal selection among retrieved questions and outperforming all computational models. (1) copy the question 5 times and concatenate, as longer outputs boost ROUGE-L (Appendix A.6); (2) retrieve a random training set answer. Our first baseline contains entities often present in the gold answer, but without actually answering the question. Our second baseline follows the "style" of an answer but is completely off-topic. As an upper bound, we estimate the ROUGE-L of gold answers themselves. On an average, there are 12 gold answers per question, so we measure the ROUGE-L of the longest gold answer with respect to the other gold answers. We also measure the maximum pairwise ROUGE-L between two gold answers for the same question. 16 We only calculate upper bounds for the validation set, since the gold answers of the KILT test set are hidden. Lower bounds beat prior work, upper bounds have low ROUGE-L: We compare our bounds with actual retrieval augmented generation systems in Table 7. Both our lower bounds (random training answer, copy input) are quite competitive, outperforming RAG (Lewis et al., 2020c) and performing close to BART + DPR (Petroni et al., 2020) without actually answering the question! This shows that ROUGE-L is fairly sensitive to simply copying entities from the question 16 Note that different gold answers were not written independently as Reddit users writing answers can read existing answers and may want to provide a non-overlapping perspective. Due to the high train/valid overlap, the best top-7 retrieved answer could be a better upper bound since it is from another Reddit post (and performs better than best gold answer). Suspecting that this result is misleading, we run another human A/B test by showing volunteers a question and asking them to choose between answers generated by our system and the longest gold answer, shuffled at random. 17 As seen in Table 3, the majority of humans prefer the gold reference answers vs generations (68% vs 14% for p = 0.6). In interviews with human annotators after completing the task, they reported that both answers were often fluent and stylistically similar, but one eventually veered off-topic. 
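The trivial bounds discussed in this section are easy to reproduce with the rouge-score package; the sketch below is our illustration, not the official KILT evaluation script (which may differ in tokenization and normalization). A candidate is scored against multiple gold answers by taking the maximum, which is how the "copy the question five times", "random training answer", and longest-gold-answer bounds are compared.

```python
import random
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def max_rouge_l(candidate: str, gold_answers: list[str]) -> float:
    """ROUGE-L F1 of a candidate against the best-matching gold answer."""
    return max(scorer.score(gold, candidate)["rougeL"].fmeasure for gold in gold_answers)

def copy_question_baseline(question: str, gold_answers: list[str]) -> float:
    # Longer outputs inflate ROUGE-L, so the question is repeated five times.
    return max_rouge_l(" ".join([question] * 5), gold_answers)

def random_answer_baseline(train_answers: list[str], gold_answers: list[str]) -> float:
    # An arbitrary training-set answer mimics the "style" of ELI5 answers.
    return max_rouge_l(random.choice(train_answers), gold_answers)

def longest_gold_upper_bound(gold_answers: list[str]) -> float:
    # Score the longest gold answer against the remaining gold answers.
    longest = max(gold_answers, key=len)
    others = [g for g in gold_answers if g is not longest]
    return max_rouge_l(longest, others) if others else 0.0
```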
Takeaway (better automatic metrics needed): Our experiments demonstrate that computing the ROUGE-L of generations against gold answers is not a meaningful way to evaluate LFQA systems, since it is not selective enough to differentiate between valid/invalid answers. There is a very small margin of improvement between trivial lower bounds and strong upper bounds, with the absolute scores of upper bounds being quite low. We suspect this is due to the long length of answers and fairly unconstrained and large output space. Difficulty of Human Evaluation To better understand the inherent difficulty of evaluation in ELI5, we interviewed human annotators (of Table 3) and found two challenges: (1) Unfamiliarity with question topics: While most annotators found the Q/A interesting, they were often unfamiliar with the technical topics discussed in the questions. This made it hard for them to assess answer correctness. The ELI5 dataset has questions in a wide variety of topics (History, Politics, Biology etc.), while most annotators were Computer Science graduate students. While we did allow annotators to use Wikipedia, they mentioned domain-experts will be better judges of factual correctness of answers. (2) Length of Answers: Annotators mentioned the paragraph-long length of answers made the task quite challenging. Annotators reported taking an average of 2 minutes per answer pair, many of which required careful thought & concentration. This was especially difficult when only part of the answer was correct and the rest had contradictions or repetitions, a common theme in our generations. Ethical Considerations Our system faces a similar set of issues as most modern text generation technology, like fabrication of facts (Zellers et al., 2019), potential for misuse (Brown et al., 2020) and reflecting biases prevalent on Reddit (the ELI5 dataset has been built using the r/ELI5 subreddit). In our work, we attempted to make text generators more factually grounded by conditioning generations on retrieved Wikipedia articles, hoping to reduce fact fabrication. Unfortunately, a thorough analysis (Section 3.1) has revealed that our system is still not grounding its generations in retrievals, and we have recommended the design of better metrics to measure factual correctness to tackle this issue. Our final models were trained using 64 Google Cloud TPUs for a total of 32 hours. As mentioned in the Google 2019 environment report, 18 "TPUs are highly efficient chips which have been specifically designed for machine learning applications". These accelerators run on Google Cloud, which has "matched 100% of its electricity consumption with renewable energy purchases, and has committed to fully decarbonize its electricity supply by 2030" (https://cloud.google. com/sustainability). More details on training time are provided in Appendix A.1. Similar to the REALM implementation, we use separate processes to run the retriever and generate training data (using a MIPS search). Since our retriever is frozen, we do not use the document index refresher available in their codebase. References Retriever: Our retriever is trained on 64 Google Cloud TPUs for a total of 4k steps and a batch size of 12288. We do early stopping on the validation data (with a smaller batch size of 512 due to smaller P100 GPU memory). Our model converges quite fast, reaching its best performance in 1.5k steps (in 43 minutes) and needing 103 minutes for the full set of 4k steps. 
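The retrieval numbers reported throughout (Tables 1, 6, 8, and 9) are R-Precision and Recall@5 against the KILT provenance annotations. As described in the evaluation setup earlier, R-Precision here amounts to checking whether the top-1 retrieval matches a gold annotation. The sketch below is our own illustration of those two metrics, not the KILT scoring script; document identifiers are assumed to be strings.

```python
def retrieval_metrics(ranked_ids: list[list[str]], gold_ids: list[set[str]]):
    """R-Precision and Recall@5 as used for the retrieval evaluation.

    ranked_ids: per example, retrieved document ids in ranked order.
    gold_ids:   per example, the human-annotated relevant document ids.
    R-Precision checks whether the top-1 retrieval matches a gold annotation;
    Recall@5 checks whether any of the top 5 retrievals does.
    """
    n = len(ranked_ids)
    r_prec = sum(ranked[0] in gold for ranked, gold in zip(ranked_ids, gold_ids) if ranked) / n
    recall5 = sum(bool(set(ranked[:5]) & gold) for ranked, gold in zip(ranked_ids, gold_ids)) / n
    return 100.0 * r_prec, 100.0 * recall5
```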
Generator: Our generator is trained on 64 Google Cloud TPUs, for a total of 100k steps on the ELI5 training set. We use the pg19_local_cluster8k configuration available in the Routing Transformer implementation. Besides the default hyperparameters, setting 15% input, attention and ReLU dropout was critical to prevent overfitting on the training set. We use a learning rate of 5e-5. Our retrievals, questions and answers are truncated / padded to 288 subword tokens (using the PG19 subword tokenizer). We use a minibatch size of 128 QA pairs, which corresponds to 332k tokens per mini-batch (of which, the loss is computed over the last 288 answer tokens, or 37k total tokens). We do not compute loss over padded tokens, and use special symbols to separate different parts of the input context. We reverse the retrieved paragraphs in context since the model uses local attention layers, and we wanted higher ranked retrievals to appear closer to the answer tokens. Our models take about 30 hours to finish 100k steps (0.92 steps / second). Hyperparameter Choices: We experimented with several different pretraining strategies (using Wikipedia), smaller model variants and hyperparameter choices manually in preliminary experiments. All these experiments performed quite poorly on ELI5, producing very short and sometimes incoherent responses. Finally, switching to a Routing Transformer model which was pretrained on a longform language modeling dataset (PG-19) significantly improved generation quality. Hyperparameters for this pretrained model (like hidden size / number of layers) were manually chosen with model capacity in mind. For our final experiments with this pretrained model we did not perform any hyperparameter search during training, primarily due to the expensive setup required to train the system. During inference, we tuned the nucleus sampling value from 0.0 to 1.0 in increments of 0.1, choosing the value with the best validation set performance. Our hyperparameter choices for contrastive learning on the retriever have been justified in an ablation study in Appendix A.2. Notably, we use very large minibatches of 12,288 to scale the number of negative examples. To train this model, we used the standard trick of data parallelism across 64 hardware accelerators. This resulted in an effective mini-batch size of 192 per chip, which is small enough to fit a BERT-base sized model on a TPU v3 chip's memory. To accumulate information across different chips before the final softmax, we used the tf.tpu.cross_replica_sum function (using an open-source wrapper found here). A.2 Ablation Study of C-REALM One of our contributions is scaling up a distantly supervised objective for training retrievers on ELI5, originally described in Jernite (2020). This method uses in-batch negative sampling, making minibatch size a critical hyperparameter for better constrastive learning. We perform controlled experiments initializing our retrievers with REALM-CCNews (Guu et al., 2020) and varying batch size and keeping all other hyperparameters consistent. In Table 8, we notice a steady increase in performance as minibatch size is increased, with the largest gains coming by doubling the batch size in Jernite (2020) from 512 to 1024. Finally, in preliminary experiments we saw no benefit of more intelligent negative sampling schemes. 
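The fine-tuning input layout described earlier in this appendix (retrievals reversed so higher-ranked documents sit closer to the answer, each segment truncated or padded to 288 subword tokens, loss computed only over answer tokens) can be sketched as follows. This is our illustration, not the Tensor2Tensor data pipeline; `tokenize` is a stand-in for the PG-19 subword tokenizer and the padding id is assumed to be 0.

```python
def build_decoder_input(tokenize, retrievals, question, answer, seg_len=288):
    """Decoder-only training sequence: [r_K, ..., r_1, q, a].

    tokenize: text -> list of subword ids (stand-in for the PG-19 tokenizer).
    retrievals: documents ordered by rank (rank 1 first); they are reversed so
    that higher-ranked documents appear closer to the answer tokens, since the
    model uses local attention in its lower layers.
    """
    PAD = 0

    def fit(text):
        ids = tokenize(text)[:seg_len]
        return ids + [PAD] * (seg_len - len(ids))

    segments = [fit(r) for r in reversed(retrievals)] + [fit(question), fit(answer)]
    tokens = [tok for seg in segments for tok in seg]
    # Loss mask: 1 on (non-padded) answer tokens, 0 everywhere else.
    loss_mask = [0] * (len(tokens) - seg_len) + [1 if t != PAD else 0 for t in segments[-1]]
    return tokens, loss_mask
```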
Batch size R-Prec Recall@5 REALM (pretrained) 6.6 14.9 256 6.2 11.0 512 (Jernite, 2020) 6.8 12.6 1024 11.5 21.0 12288 (Ours) 13.3 21.2 Table 8: The effect of minibatch size on the validation performance of C-REALM. As a baseline, we also add the retrieval performance of the REALM pretrained model which is used as an initialization. Next, we investigate the effect of initialization on the training of C-REALM. Unlike Jernite (2020) who initialize their model with BERT, before training we initialize our retriever with a pretrained self-supervised retriever. As a baseline, we initialize our model with ICT, a weaker self-supervised retriever introduced in Lee et al. (2019). Both models are trained with minibatch sizes of 12228. In Table 9, we notice a large improvement in performance when using a better initialization, confirming our design decisions. A.3 Number of trainable parameters In Table 10 we present the number of trainable parameters in our model compared to baselines on the leaderboard. Our generator is slightly larger than the models used in prior work, but we utilize a smaller retriever due to the shared query and candidate encoders in REALM. Overall, our system has a similar total number of parameters as baseline models like RAG and BART + DPR. Initialization R-Prec. R@5 REALM (pretrained) 6.6 14.9 ICT (Lee et al., 2019) 9.3 16.5 REALM (Guu et al., 2020) 13.3 21.2 Table 9: The effect of initialization on C-REALM. As a baseline, we also add the retrieval performance of the REALM-CCNews pretrained model without any finetuning on ELI5. Model Generator Retriever Index T5-base 220M - - BART 406M - - RAG 406M 220M 15B BART + DPR 406M 220M 15B RT + C-REALM 486M 110M 15B A.4 Generations from our System More generations have been provided (along with retrievals, highlighted to show n-gram overlap) in the supplementary material (data) as HTML files. We also present a few samples in Table 16. A.5 Human Evaluation Setup We conducted several A/B tests between variants of our model using human annotators. We asked a total of 20 participants for help who voluntarily agreed to help with the annotation process. Most participants were English-speaking graduate students in computer science. In every test, participants were shown a question along with two answers (generated by different systems) presented in a random order. They were then asked to choose which generation (1) answered the question better / which answer was more relevant to the question; (2) was more coherent / had less repetition; (3) was more factually correct. Since some annotators had a limited time, we asked them to prioritize question (1) over (2) / (3). Annotators were allowed to select "Tie" if they could not choose between the systems. We also permitted them to use search engines, but suggested restricting search to Wikipedia. We present all our results in Table 15. We also interviewed some participants after the annotation process and discuss our findings in Section 3.4. Note that while these A/B tests help us understand which system is relatively better, they do not provide an absolute measure of performance (Celikyilmaz et al., 2020) -annotators reported that there were cases where both answers were very good and other cases where both were very poor. This is a limitation of A/B testing. A.6 Effect of length on ROUGE-L In this section we measure the effect of outputs lengths on ROUGE-L scores. To conduct this experiment, we truncate generations by our system to a fixed fraction of tokens across all instances. 
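A minimal sketch of that truncation protocol (our illustration, again using the rouge-score package): each generation is cut to a fraction f of its tokens, and separately the truncated text is tiled roughly 1/f times so that length is held constant while content is degraded.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def truncate_and_repeat_scores(generation: str, gold: str, fraction: float):
    """ROUGE-L of (a) the generation truncated to `fraction` of its tokens and
    (b) that truncation repeated ~1/fraction times to restore the original length."""
    tokens = generation.split()
    keep = max(1, int(len(tokens) * fraction))
    truncated = " ".join(tokens[:keep])
    repeats = max(1, round(1.0 / fraction))
    repeated = " ".join([truncated] * repeats)
    score = lambda cand: scorer.score(gold, cand)["rougeL"].fmeasure
    return score(truncated), score(repeated)
```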
As we see in Table 11 in the Truncate column, shorter generations tend have lower ROUGE-L. To disentangle the effects of length and content, we also measure the generation quality by repeating the truncated generations several times until it matches the original generation length. In the Repeat 1/f times column, we notice a gap between our model's original generation (24.4 ROUGE-L) and the equallength truncated generations with repetition. These results indicate that while length helps improve ROUGE-L scores, simple repetition is insufficient. A.7 More experiments on measuring retrieval grounding of generations In this section we provide some more experiments testing the grounding of generations in retrieved documents. Overall, trends are consistent with our observations in Section 3.1. Scatter plots between generation quality and unigram overlap with retrievals: We present this scatter plot in no correlation between the two quantities, with Spearman ฯ = 0.09. Instances with correct predicted retrieval: In Table 12, we present results similar to Section 3.1 considering only those instances where at least one retrieved document matched the gold annotation (roughly 23% instances). We also present a scatter plot on the same set of instances in Figure 5 and note a low correlation of ฯ = 0.13. vs predicted retr. vs random retr. R-L 1-g 2-g 1-g 2-g Table 12: Comparison of generations conditioned on retrievals from C-REALM (Predicted) and randomly chosen retrievals (Random), for those cases where C-REALM predicted the correct retrieval. Notice very small differences in generation quality (R-L) as well as the fraction of n-grams (n-g) in the generation overlapping with retrievals predicted by C-REALM (vs predicted retr.). To control for overlap due to stopwords, we also add n-gram overlaps with the randomly sampled retrievals. model between generative quality (ROUGE-L vs reference on X-axis) and grounding with retrieval (unigram overlap with retrieved documents on Y-axis). Unlike Figure 4, this plot only considers those cases where C-REALM predicted the correct retrieval. The plot shows very little correlation between the two quantities (Spearman ฯ = 0.13). Experiments with vs predicted retr. vs random retr. R-L 1-g 2-g 1-g 2-g Table 13: Comparison of generations (with p = 0.9) conditioned on retrievals from C-REALM (Predicted) and randomly chosen retrievals (Random). Notice very small differences in: (1) ROUGE-L vs gold answers (R-L); (2) n-gram overlap (n-g) with retrievals predicted by C-REALM (vs predicted retr.). Gold answers also have a similar overlap with predicted retrievals. To control for overlap due to stopwords, we also add n-gram overlaps with the randomly sampled retrievals. higher nucleus sampling values. As we saw in Section 2.3, these generations tend to be more fluent and coherent, but less relevant to the question. In Table 13 and Table 14 we find consistent trends as Section 3.1, with very little difference between models conditioned on retrievals from C-REALM and random retrievals. vs qn. vs predicted retr. vs random retr. but not in qn. but not in qn. (lemmatized nouns, proper nouns, numbers only) A: The same thing that causes the clouds you see at the start of a storm to form in the first place. The air pressure changes as the jet moves through the air, causing it to cool down and condense into water droplets. 
These droplets are in fact a lot bigger than the water droplets in the clouds, so when the plane passes through them, they can cause the air pressure to change, causing the water droplets to condense into clouds. This is what you see when you look at a cloud formation from the ground. In addition to this, when the plane passes through the clouds, it also moves through the air, causing it to change its temperature, causing the water droplets to condense and form clouds. These clouds are often quite large, and can block out the sun. This is what you see when you look at a contrail. Finally, the water vapor that forms at high altitudes is cooled by the air above, and as it cools, it condenses into clouds. This is what you see when you look at a cumulonimbus cloud. These clouds are even larger than the water droplets in the clouds, and they block out the sun. Q: How do shampoo and conditioner work? A: The key ingredients in shampoo and conditioner are usually surfactants, they tend to attract oil from your skin. Shampoo will usually contain an anti-dandruff agent (dandruff is caused by fungi in your skin which cause dandruff). They also contain some moisturizing agents (moisturizers are things that stay in your skin longer and protect your scalp from cleaning), and a lot of anti-dandruff (dandruff is a side effect of high doses of both surfactants and moisturizers) Conditioner will contain moisturizing agents, like a skin moisturizer. They tend to cause oils to be separated from your hair (e.g. wash your hair with conditioner instead of shampoo). They both have a foaming agent that makes them foam up (usually Hydrogen Peroxide). They also have a foaming agent that stops dirt from attaching to your hair. Some conditioner will also suppress the action of the shampoo (to help prevent dirt from attaching to your hair). So what's the difference between shampoo and conditioner? Well, if you combine both the shampoo and conditioner Q: When glancing at a clock, why does the first second after glancing at it sometimes feel longer than the rest? A: This is known as chronostasis. Your brain gets used to seeing the second hand move after the first, so it ignores the first one. If you were to look at the clock a lot, you'd see the second hand move a lot more than the first, and you'd be able to tell the difference between the second and the first. The second hand will appear to move much slower than the first hand because your brain is used to seeing the second hand move a lot more than the first. If you were to go on a long trip, you'd get used to the second hand moving a lot more than the first, and you'd have no problem telling the difference between the second and the first hand. It's kind of like how you don't notice your nose when you're driving, but if you look at it for a while, you start to notice it. Your brain is just getting used to seeing the nose move a lot more than the other, and it can't tell the difference between the two. It's kind of like that with your eyes. Q: Why do Britain and other English empire countries still bow to monarchs? What real purpose does the queen serve? A: Depends what you define as "purpose". There is no specific purpose for the British monarch -rather, they are figureheads that serve a role. The role is to have a high degree of influence over a country, though the rules of the monarchy can somewhat vary from country to country, and the ceremony of "kissing the royal corpse" does not always involve kissing the monarch. 
Whether that's the only reason for the custom, or if it was the only reason, I can't say, but that's the reasoning. When the U.K. was in full power, the monarch was a puppet of Parliament, and the powers of the monarch were transferred to the Prime Minister and thus to the Prime Minister's deputy, who then became the Prime Minister. Since then, Parliament has been able to vote on legislation that goes through the monarch, although they may still act as the monarch's representative in negotiating treaties, which can have very very deep consequences. The Queen's role, as a representative of Britain, doesn't necessarily involve her formal approval of any of the laws or legislation that goes through Parliament, though. All that's left is for the legislature to pass laws Q: What exactly is fire, in detail? How can light and heat come from something we can't really touch? A: Fire is the chemical reaction of fuel (oxygen) with (what we call) impurities (ash, soot, oil etc). These impurities are created due to a number of things. If you were to study the chemical composition of a log (or a candle) and analyse it you would discover the chemical formula (A, C, and G) which is a simplified version of what actually happens. There is always fuel, and there is always the ash/soot. Wood is made up of mostly carbon and hydrogen and these are the fuels that create the fire. Hydrogen and oxygen are reactive, and they form water (H2O) and CO2 (CO2). If we split a log into smaller pieces, we start to see the process of combustion as it occurs. The carbon is burned, and with the explosion of CO, H2O and H2CO2, we get water and carbon dioxide (mostly). If we smash a log together, we start to see what happens when we also hit another log with another log...imagine smashing two sand castles together. We see how the sand castles and brick castles form and collapse at different rates...then it starts to burn. The smoke, ash and flames are the actual combustion of fuel. Q: What goes on in those tall tower buildings owned by major banks? A: The actual buildings are not there for profit but for show. There are many real world historical buildings that have little to no effect other than being a symbol of an economic entity or symbol of an era or location, and nothing more. For example look at Sears, Sears didn't care what went on inside, it was all about the _appearance_ of its location, the prestige of the location, the facilities and so on. It didn't care about how long it took it to operate, it was about how much people would pay to go see it. Sears was a landmark as a cultural movement and other big companies followed suit, so if you want to see a building you've never seen before, you have to go see Sears, just like you have to see a Toyota Camry for Toyota Camry. They used to be all about building new factories, some of them if I recall, but now that they're bigger, that means that more factory jobs are coming to them. You've probably seen them in stores as stores where people buy and sell stuff, so there aren't that many places for them to come from. Instead, it's just for show, a symbol of rich people. ( a )Figure 1 : a1Many held-out questions are paraphrased in the training set. Best answer to similar train questions gets 27.4 ROUGE-L (d) Annotators find it difficult to judge long answers (with repetition) & correctness of technical content (c) Conditioning answer generation on random documents instead of relevant ones does not measurably impact its factual correctness. 
(b) Longer outputs get higher ROUGE-L. Figure 1: A summary of the major hurdles (a-d) to progress in long-form question answering with ELI5.

Baselines: We compare our model with the other entries on the ELI5 KILT leaderboard, which are either generation-only, like T5-base (Raffel et al., 2020) and BART (Lewis et al., 2020b), or variants of BART using retrieval, such as RAG (Lewis et al., 2020c) and BART + DPR (Petroni et al., 2020). These systems are based on massive pretrained language models with a similar number of parameters as our model (details in Appendix A.3).

Figure 2: Example generation from our LFQA system with p = 0.9. Generations are long and coherent, but suffer from repetition towards the end (more in Appendix A.4 and the attached supplementary data).

Figure 3: Figures (from Roy et al., 2020) showing 2-D attention schemes for the sparse attention mechanism used in the Routing Transformer. Lower layers pool in local information via sliding-window local attention (Subfigure 3a), while upper layers gather global information for every token via clustering (Subfigure 3b).

Figure 4: Scatter plot for generations from the p = 0.6 model: generation quality (ROUGE-L vs. reference, X-axis) against grounding in retrieval (unigram overlap with retrieved documents, Y-axis). The plot shows no correlation between the two quantities.

Figure 5: Scatter plot for generations from the p = 0.6 model, as in Figure 4 but restricted to cases where C-REALM predicted the correct retrieval.

Experiments with p = 0.9: We conduct additional experiments studying our model variant with higher nucleus sampling values.

Dataset and evaluation details: We evaluate our model on the KILT validation and test subsets of ELI5 (Petroni et al., 2020), since the original ELI5 dataset does not have human annotations to measure retriever performance. We downloaded the ELI5 dataset (Fan et al., 2019) from the KILT Github repository. 5 This version of the dataset has 272,634 training examples, 1,507 validation examples and 600 test examples; the test set answers are hidden.

3 Our hyperparameters have been chosen manually with minimal tuning. See Appendix A.1 for details.
4 We tried training the retriever jointly with RT using the attention bias scheme proposed in MARGE (Lewis et al., 2020a). This improved perplexity only in autoencoding settings where the gold answer itself is used as a retrieval query (like the setup in Lewis et al., 2020a), which is not valid in LFQA.
5 github.com/facebookresearch/KILT

Table 1: Results on the KILT test set for ELI5 for (1) retrieval performance, using R-precision and Recall@5 (RPrec, R@5), and (2) generation quality, using ROUGE-L (R-L). These scores are combined to produce the final metric KILT R-L (KRL). We outperform prior work on both generation and combined scores.

Model                     RPrec   R@5    F1     R-L    KRL
T5-base                   0.0     0.0    16.1   19.1   0.0
BART                      0.0     0.0    19.2   20.6   0.0
RAG                       11.0    22.9   14.5   14.1   1.7
BART + DPR                10.7    26.9   17.9   17.4   1.9
RT + REALM (p = 0.9)      6.7     15.5   25.1   21.5   1.4
RT + C-REALM (p = 0.9)    10.2    24.4   25.4   21.5   2.1
RT + REALM (p = 0.6)      6.7     15.7   23.1   23.4   1.5
RT + C-REALM (p = 0.6)    10.7    24.6   22.9   23.2   2.4

Table 3: Human evaluation results with exact number of ratings shown in

Table 7: Upper (↑) and lower (↓) bounds to performance on ELI5. Lower bounds have been submitted to the public KILT leaderboard as "Metrics Test".

as well as stylistic properties of ELI5. On the other hand, upper bounds (longest gold answer) perform worse than our system (21.2 vs 24.4). Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds.
Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, Andrew McCallum, Rajarshi Das, and the rest of the UMass NLP group for helpful discussions and suggestions at various stages in the project. This work was primarily done during KK's internship at Google Brain, mentored by AR. MI and KK are supported by award IIS-1955567 from the National Science Foundation (NSF).Takeaway: Human evaluation is challenging but necessary for evaluating LFQA. Crowd-workers are unlikely to spend time reading & analyzing long text (Akoury et al., 2020). Hence, it is imper- ative to design simpler evaluations. One effort in this direction is Dugan et al. (2020), who reveal one generated sentence at a time and estimate system quality based on the number of sentences which fooled humans. Another promising direction is ex- trinsic evaluation (Celikyilmaz et al., 2020) where humans actually interact with systems in real-world scenarios such as the Alexa Prize (Ram et al., 2018) or STORIUM (Akoury et al., 2020). Chang and Zora Tung) for help with their codebase and several useful discussions which helped us im- prove our experiments. We are grateful to Tu Vu for help with the QQP classifier. We thank Jules Gagnon- the-loop story generation. In Proceedings of Empirical Methods in Natural Language Processing. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In Proceedings of the International Conference of Machine Learning. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570.Martรญn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} symposium on operating systems design and implementation ({OSDI} 16), pages 265-283. Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. Sto- rium: A dataset and evaluation platform for machine- in-Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference of Ma- chine Learning. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Conference of the North American Chapter of the Association for Computational Lin- guistics. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations. Liam Dugan, Daphne Ippolito, Arun Kirubarajan, and Chris Callison-Burch. 2020. RoFT: A tool for eval- uating human detection of machine-generated text. In Proceedings of the 2020 Conference on Empiri- cal Methods in Natural Language Processing: Sys- tem Demonstrations. Association for Computational Linguistics. Esin Durmus, He He, and Mona Diab. 2020. 
Feqa: A question answering evaluation framework for faith- fulness assessment in abstractive summarization. In Proceedings of the Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the Association for Computational Linguistics. Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378. Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the Interna- tional Conference of Machine Learning. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Conference of the North American Chapter of the Association for Computa- tional Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations. Shankar Iyer, Nikhil Dandekar, and Kornรฉl Csernai. 2017. First quora dataset release: Question pairs. Gautier Izacard and Edouard Grave. 2020. Lever- aging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282. Yacine Jernite. 2020. Explain anything like i'm five: A model for open domain long form question answer- ing. https://yjernite.github.io/lfqa.html. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of Empirical Methods in Natural Language Processing. Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of Empirical Methods in Natural Lan- guage Processing. Stanley Kok and Chris Brockett. 2010. Hitting the right paraphrases in good time. In Conference of the North American Chapter of the Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the International Conference on Learning Repre- sentations. Hai Wang and David McAllester. 2020. On-the-fly in- formation retrieval augmentation for language mod- els. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, pages 114-119. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document gener- ation. In Proceedings of Empirical Methods in Nat- ural Language Processing. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2018. Transfertransfo: A trans- fer learning approach for neural network based con- versational agents. In NeurIPS CAI Workshop. Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi- jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Process- ing Systems, pages 9054-9065. Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christo- pher D Manning, and Curtis P Langlotz. 2020. 
Op- timizing the factual correctness of a summary: A study of summarizing radiology reports. In Proceed- ings of the Association for Computational Linguis- tics. Chunting Zhou, Jiatao Gu, Mona Diab, Paco Guz- man, Luke Zettlemoyer, and Marjan Ghazvinine- jad. 2020. Detecting hallucinated content in condi- tional neural sequence generation. arXiv preprint arXiv:2011.02593. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texy- gen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Con- ference on Research & Development in Information Retrieval. A Appendices for "Hurdles to Progress in Long-form Question Answering" A.1 Training & Model Details All our models are developed and trained us- ing TensorFlow 1.15 (Abadi et al., 2016) and Tensor2Tensor (Vaswani et al., 2018). Our imple- mentations are based on the open-source codebases of REALM 19 and the Routing Transformer. 20 Table 10 : 10The number of parameters used by our model and baselines. Our generator is slightly bigger than other submissions on the leaderboard, but we use a smaller retriever with a similar sized index. Table 11 : 11Effect of truncating generations (Truncate) from the p = 0.6 model to keep the first f fraction of tokens, and then repeating the truncated generations 1/f times to match the original length (Repeat ...). No- tice a consistent increase in ROUGE-L with longer out- puts, but a gap between the original generations (24.4) and equal-length generations formed by repeating trun- cations (Repeat 1/f times column). Table 14 : 14A fine-grained version ofTable 13measuring the unigram overlap of nouns/numbers in the generations with the input question (vs qn.), retrievals predicted by C-REALM (vs predicted retr.) and randomly sampled retrievals (vs random retr.). Similar to Table 13, notice very little difference with and without retrieval.Experiment 1: A comparison between nucleus sampling p values (0.6, 0.9), conditioning on predicted retrievals (pred.). Result: Lower entropy more relevant to question, but higher entropy more coherent and lesser repetition.Experiment 2: A comparison between generations conditioned on predicted (pred.) and random retrievals (rand.). Result: Little difference in generation quality / coherence / relevance to question, high amounts of tie. p = 0.6, pred. p = 0.6, rand. Which generation answers the question better? Which ans. is more factually correct...** p = 0.9, pred. p = 0.9, rand. Which generation answers the question better?Experiment 3: A comparison between generations conditioned on predicted retrievals (pred.) and the longest gold answer. Result: Strong preference for gold answers over generations.A B Question Prefer A Prefer B Tie p = 0.6, pred. p = 0.9, pred. Which generation answers the question better? 41% (65) 30% (48) 29% (46) Which answer is more coherent? 27% (42) 50% (79) 23% (37) Which ans. is more factually correct + sensical? 30% (47) 37% (58) 33% (52) 40% (78) 33% (64) 27% (51) Which answer is more coherent?** 55% (12) 27% ( 6) 18% ( 4) 48% (10) 9% ( 2) 43% ( 9) 31% (52) 37% (63) 32% (54) Which answer is more coherent? 32% (26) 36% (30) 32% (26) Which ans. is more factually correct + sensical? 28% (23) 35% (29) 37% (30) p = 0.6, pred. gold answer Which generation answers the question better? 14% (29) 68% (138) 18% (36) Which answer is more coherent? 7% ( 8) 71% ( 77) 21% (23) Which ans. is more factually correct + sensical? 2% ( 2) 76% ( 65) 22% (19) p = 0.9, pred. 
gold answer Which generation answers the question better? 17% (49) 72% (203) 11% (31) Which answer is more coherent? 13% (14) 61% ( 65) 25% (27) Which ans. is more factually correct + sensical? 6% ( 6) 72% ( 78) 22% (24) Table 15 : 15Human evaluations experiments with exact number of ratings shown in (ยท). Differences greater than 10% with more than 50 total samples have been bold marked. The experiments marked with ** have less than 50 samples, so it is difficult to draw meaningful conclusions. Q: What causes the trail behind jets at high altitude? Table 16 : 16Example generations from our LFQA system with p = 0.9. State-of-the-art as of April 3, 2021 -the "Google Research & UMass Amherst" team entry on https: //evalai.cloudcv.org/web/challenges/ challenge-page/689/leaderboard/1908 As in Holtzman et al. (2020), a human study reveals that higher entropy (p = 0.9) answers are slightly more coherent and sensible, but lower entropy answers (p = 0.6) are more relevant to the question (details in Appendix A.5). Corresponding experiments with the p = 0.9 variant of our model are presented in Appendix A.7. All these trends persist even on questions for which our retriever predicts the ground-truth document (Appendix A.7) 9 Details of our experimental setup in Appendix A.5. Another issue of KILT-RL is ignoring non top-1 retrievals, penalizing models using multiple retrievals together in context. 14 The ELI5 demo from Jernite (2020) also retrieves the top-1 similar training set question. Qualitatively, we found many validation examples had near-identical train paraphrases.15 We pay workers 4 cents per question pair ($8-12 / hr). We only hire workers from USA, UK and Australia with a 95% or higher approval rating and at least 1000 approved HITs. Human A/B testing details in Appendix A.5. ConclusionWe present a "retrieval augmented" generation system that achieves state-of-the-art performance on the ELI5 long-form question answering dataset. However, an in-depth analysis reveals several issues not only with our model, but also with the ELI5 dataset & evaluation metrics. We hope that the community works towards solving these issues so that we can climb the right hills and make meaningful progress on this important task. https://www.gstatic.com/ gumdrop/sustainability/ google-2019-environmental-report.pdf https://github.com/google-research/ language/tree/master/language/realm 20 https://github.com/google-research/ google-research/tree/master/routing_ transformer Attention Maps: We show the 2D plots of our generator's attention maps inFigure 3.(a) Local attention (b) Routing attention AcknowledgementsFirst and foremost, we thank the twenty people who volunteered to help out with with the human annotation experiments. We are very grateful to Vidhisha Balachandran, Niki Parmar, and Ashish Vaswani for weekly meetings discussing progress and the REALM team (Kenton Lee, Kelvin Guu, Ming-Wei
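To make the truncation-and-repetition check summarized in Table 11 concrete, here is a minimal sketch of how such a comparison could be run. The `rouge-score` package, the helper names, and the toy reference/generation strings are our choices for illustration; the paper does not specify its ROUGE implementation or exact preprocessing.

```python
# Sketch of a Table 11-style check: truncate a generation to the first fraction f
# of its tokens, optionally repeat the truncation 1/f times to restore the
# original length, and compare ROUGE-L against the reference.
# Requires: pip install rouge-score  (implementation choice, not the paper's).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(reference: str, prediction: str) -> float:
    """ROUGE-L F1 between a reference answer and a predicted answer."""
    return scorer.score(reference, prediction)["rougeL"].fmeasure

def truncate(text: str, f: float) -> str:
    """Keep only the first fraction f of the whitespace tokens."""
    tokens = text.split()
    return " ".join(tokens[: max(1, int(len(tokens) * f))])

def truncate_and_repeat(text: str, f: float) -> str:
    """Truncate to fraction f, then repeat 1/f times to match the original length."""
    return " ".join([truncate(text, f)] * int(round(1 / f)))

if __name__ == "__main__":
    # Toy strings, not taken from ELI5.
    reference = "water vapour in the jet exhaust condenses into ice crystals at high altitude"
    generation = "the trail is formed by water vapour from the engines condensing in the cold air"
    for f in (1.0, 0.5, 0.25):
        print(f,
              round(rouge_l(reference, truncate(generation, f)), 3),
              round(rouge_l(reference, truncate_and_repeat(generation, f)), 3))
```

Running this over a full set of generations (rather than a toy pair) is what would reproduce the length-sensitivity trend that the caption of Table 11 describes.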
[ "https://github.com/google-research/", "https://github.com/google-research/" ]
[ "STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer", "STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer" ]
[ "Yiwei Lyu \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Paul Pu Liang pliang@cs.cmu.edu \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Hai Pham htpham@cs.cmu.edu \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Eduard Hovy \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Barnabรกs Pรณczos \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Ruslan Salakhutdinov \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n", "Louis-Philippe Morency \nMachine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n\n" ]
[ "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n", "Machine Learning Department\nCarnegie Mellon University โ™  Language Technologies Institute\nCarnegie Mellon University\n" ]
[ "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies" ]
Text style transfer aims to controllably generate text with targeted stylistic changes while maintaining core meaning from the source sentence constant. Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g. positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence. In this paper, we introduce a large-scale benchmark, STYLEPTB, with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as (2) compositions of multiple transfers which allow modeling of fine-grained stylistic changes as building blocks for more complex, high-level transfers. By benchmarking existing methods on STYLEPTB, we find that they struggle to model fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will encourage future research in controllable text style transfer, compositional models, and learning disentangled representations. Solving these challenges would present important steps towards controllable text generation.
10.18653/v1/2021.naacl-main.171
[ "https://www.aclweb.org/anthology/2021.naacl-main.171.pdf" ]
233,210,062
2104.05196
72905000002f89941ec2b2190ff5007ce4396f70
STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer June 6-11, 2021 Yiwei Lyu Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Paul Pu Liang pliang@cs.cmu.edu Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Hai Pham htpham@cs.cmu.edu Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Eduard Hovy Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Barnabรกs Pรณczos Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Ruslan Salakhutdinov Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University Louis-Philippe Morency Machine Learning Department Carnegie Mellon University โ™  Language Technologies Institute Carnegie Mellon University STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJune 6-11, 20212116 Text style transfer aims to controllably generate text with targeted stylistic changes while maintaining core meaning from the source sentence constant. Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g. positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence. In this paper, we introduce a large-scale benchmark, STYLEPTB, with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as (2) compositions of multiple transfers which allow modeling of fine-grained stylistic changes as building blocks for more complex, high-level transfers. By benchmarking existing methods on STYLEPTB, we find that they struggle to model fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will encourage future research in controllable text style transfer, compositional models, and learning disentangled representations. Solving these challenges would present important steps towards controllable text generation. Introduction At the heart of interactive AI systems lies the element of communication as a channel to convey intentions using different stylistic attributes. Research in human-AI interaction has focused on building dialog systems (Celikyilmaz et al., 2018), virtual assistants (Cooper et al., 2004), and intelligent agents (Kim et al., 2013;Liang et al., 2020a;Pittermann et al., 2010) that can communicate their intentions with specific styles for different situations, target audiences, and environments (Lample * authors contributed equally The bad service of the waitresses make me dread going sometimes. The good service of the waitresses makes me dread going sometimes. The good service of the waitresses makes me enjoy going sometimes. I left three messages without a call back. I left three messages. I left three thankful messages. 
After 3 months they can't be too new now. After 3 months they can't be too new. Figure 1: STYLEPTB provides a large-scale resource to study fine-grained compositional style transfer. The styles provided in STYLEPTB (in green) span lexical, syntax, semantic, and thematic aspects (DiMarco and Hirst, 1993) which can be composed to form high-level style transfers as commonly studied in existing benchmarks (e.g. Yelp for sentiment (Shen et al., 2017) and GYAFC for formality (Rao and Tetreault, 2018)). et al., 2019;Li et al., 2018). For example, expressing the same facts using either formal or informal styles can be more suitable for certain target audiences (Rao and Tetreault, 2018). What is a style in natural languages? Existing style transfer benchmarks primarily focus on individual high-level stylistic changes across sentiment (Shen et al., 2017), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and writing styles (Jhamtani et al., 2017). Figure 1 provides some motivating examples to show that the high-level style transfers as commonly studied in existing benchmarks (e.g. Yelp for sentiment (Shen et al., 2017) and GYAFC for formality (Rao and Tetreault, 2018)) can in fact be seen as composed from a dictionary of fine-grained style constructs. This alternative way of studying styles brings additional flexibility that enables finegrained control with the possibility to compose a broader space of styles spanning tense, sentence structure, phrase emphasis, and information contained in the sentence. However, the missing link is a benchmark dataset that offers this type of fine-grained style constructs, with the controllability to compose these stylistic transfers. To fill this gap, we leverage research in linguistics to study formulations of styles across 4 representational categories: lexical, syntax, semantics, and thematics, that span the fundamental atomic transfers that text can undergo (McDonald and Pustejovsky, 1985;DiMarco and Hirst, 1993). Using these insights, we introduce a large-scale benchmark with (1) paired sentences undergoing 21 finegrained stylistic changes spanning the most atomic lexical, syntactic, semantic, and thematic style constructs, as well as (2) compositions of multiple transfers which model how fine-grained style constructs compose to form more complex, high-level transfers. Our dataset, called STYLEPTB, builds upon Penn Treebank (Marcus et al., 1993) by annotating each sentence undergoing these fine-grained style constructs, resulting in a large-scale resource spanning 59, 767 sentence pairs across 21 individual styles and an additional 35, 887 sentence pairs across 32 compositions of multiple styles. STYLEPTB allows us to study the performance of state-of-the-art style transfer models when faced with the new challenge of fine-grained style transfer. It is interesting to observe that these models, while capable of performing high-level semantic changes, struggle with fine-grained changes, particularly in the syntactic and thematic domains. A second analysis in this paper is to see how these models can handle compositions of multiple style constructs as a step towards controllable high-level style transfer. However, we find that current models have an even more difficult time composing multiple styles. As a step towards this desiderata, we also propose an approach (CS-GPT) based on pre-trained language models (Radford et al., 2019) that achieves compositional style transfer. 
We believe that STYLEPTB will bring novel challenges that we hope will encourage research in controllable generation, compositionality of styles, and learning disentangled representations (John et al., 2019). From a broader perspective, we conclude with the observation that controllable style transfer models trained on STYLEPTB can help mitigate social biases in pre-trained language models. Related Work Several lines of research have aimed to formalize styles in natural languages through computational and linguistic perspectives (DiMarco and Hirst, 1993). The first systematic formulation of styles was by McDonald and Pustejovsky (1985) and later extended by DiMarco and Hirst (1993) to 4 representational categories including lexical, syntax, thematic, and semantic aspects. Following this, there has been some early efforts applying stylistic analysis into dialog generation (Hovy, 1987), machine translation (DiMarco, 1994), and text generation (Gatt and Krahmer, 2018). We take advantage of this prior work when formalizing our new STYLEPTB dataset. Current benchmarks for style transfer focus on high-level style definitions such as transfer of sentiment (Shen et al., 2017;Lample et al., 2019;Li et al., 2018;Wu et al., 2019), politeness (Madaan et al., 2020), formality (Rao andTetreault, 2018;Liu et al., 2020;Krishna et al., 2020), writing styles (Jhamtani et al., 2017;Syed et al., 2020;Jin et al., 2020) and some other styles (Kang and Hovy, 2019). However, these only focus on only high-level styles, unlike STYLEPTB. Computational models for style transfer span statistical NLP methods (Hovy, 1987;Xu et al., 2012), neural generative models (Prabhumoye et al., 2018;Lample et al., 2019;He et al., 2020), and Retrieve-and-Edit approaches (Li et al., 2018;Hashimoto et al., 2018;Guu et al., 2018;Sudhakar et al., 2019;Madaan et al., 2020). These approaches work for a predefined set of styles but are unable to generalize to compositions of styles. Evaluating style transfer is difficult due to the diversity of plausible transferred sentences. In addition to automatic scores such as BLEU, perplexity, or binary classification accuracy of style transfer (Hu et al., 2017;Lample et al., 2019;He et al., 2020), other automatic metrics (Fu et al., 2018;Mir et al., 2019) and human evaluation are also commonly used (Li et al., 2018;Shen et al., 2017). Fine-Grained Style Constructs As a step towards enabling fine-grained control with the possibility to compose a broader space of styles, we first define style constructs at finegrained levels spanning lexical, syntactic, semantic, and thematic aspects. When selecting these style constructs, we have 2 goals in mind: (1) they should be representative of the four aspects (lexical, syntactic, semantic, thematic) following the formal categorizations in DiMarco and Hirst (1993), and (2) the transfers should be consistent (i.e. welldefined such that if multiple annotators are asked to modify the same sentence, the results will be similar). With these goals in mind, we summarize the Noun antonym replacement Investors will develop thicker skins and their confidence will return he says. Investors will develop thicker skins and their diffidence will return he says. Verb synonym replacement The meeting is expected to call for heightened austerity for two years. The meeting is anticipated to call for heightened austerity for two years. Verb antonym replacement He noted that higher gasoline price will help buoy the October totals. 
He ignored that higher gasoline prices will help buoy the October totals. ADJ synonym replacement Most other states have enacted similar bans. Most other states have enacted alike bans. ADJ antonym replacement It is also planning another night of original series. It is also planning another night of unoriginal series. Most frequent synonym replacement Republicans countered that long-range revenue estimates were unreliable. Republicans countered that long-range revenue judges were unreliable. Least frequent synonym replacement Merrill Lynch Capital Markets Inc. is the sole underwriter for the offering . Merrill Lynch Capital Markets Inc. is the sole investment-banker for the oblation . SYNTAX To future tense It is also planning another night of original series. It will be also planning another night of original series. To present tense Sen. Mitchell urged them to desist. Sen. Mitchell urges them to desist. To past tense It is also planning another night of original series. It was also planning another night of original series. Active to passive He also received 20-year sentences for each of the 24 passengers injured. 20-year sentences also were received by him for each of the 24 passengers injured. Passive to active Most bills are drafted by bureaucrats not politicians. Bureaucrats not politicians draft most bills. PP front to back In Indianapolis Lilly declined comment. Lilly declined comment in Indianapolis . PP back to front The dollar has been strong unlike 1987 . Unlike 1987 the dollar has been strong. SEMANTICS ADJ or ADV removal The controls on cooperatives appeared relatively liberal when first introduced The controls on cooperatives appeared liberal when introduced PP removal The controls on cooperatives appeared relatively liberal when first introduced. The controls appeared relatively liberal when first introduced. Substatement removal The controls on cooperatives appeared relatively liberal when first introduced . The controls on cooperatives appeared relatively liberal. Information addition He reports his business is up slightly from customers replacing old stock. [ 'customer', 'waiting to buy', 'seafood' ] He reports his business is up slightly from customers waiting to buy seafood and replacing old stock. THEMATICS Verb/Action emphasis He intends to add to the litigation staff. add Adding to the litigation staff is what he intends to do. Adjective emphasis The comparable year-earlier number was 56 million a spokesman said. comparable A spokesman said the year-earlier number of 56 million was comparable . Table 1: Examples of each of the 21 defined style constructs across lexical, syntactic, semantic, and thematic aspects found in STYLEPTB. The original phrase is in cyan and the corresponding target phrase is in magenta . Note that some thematic and semantic transfers require additional information, highlighted in red . following 21 chosen fine-grained style constructs spanning 4 categories and also provide detailed examples in Table 1. Lexical transfers are those at fine-grained lexicon levels (i.e. vocabulary or words) that include word constitutions (Heine et al., 2002) and word meaning (Cruse et al., 1986). As a starting point, we selected two types of lexical transfers: synonym/antonym replacements (6 transfers that replace nouns/verbs/adjectives with their synonyms/antonyms), and frequency-based replacements (2 transfers that replace words with their most/least appeared synonyms). The synonym/antonym resources are taken from Wordnet (Fellbaum, 2012). 
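As a concrete illustration of these WordNet-driven lexical transfers, the sketch below looks up one synonym or antonym for a word with a given part of speech. The function names are ours and this is not the STYLEPTB release's script; it only shows the kind of lookup the synonym/antonym replacements rely on.

```python
# Minimal sketch of WordNet-based lexical transfers (synonym / antonym lookup).
# Illustrative only; the actual STYLEPTB generation scripts are not shown here.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def synonym(word: str, pos) -> str:
    """Return one WordNet synonym of `word` with the given POS (e.g. wn.NOUN), or None."""
    for syn in wn.synsets(word, pos=pos):
        for lemma in syn.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                return name
    return None

def antonym(word: str, pos) -> str:
    """Return one WordNet antonym of `word` with the given POS, or None."""
    for syn in wn.synsets(word, pos=pos):
        for lemma in syn.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name().replace("_", " ")
    return None

if __name__ == "__main__":
    # Outputs depend on the WordNet release; these calls may also return None.
    print(synonym("meeting", wn.NOUN))
    print(antonym("confidence", wn.NOUN))
```

The frequency-based variants described above would additionally rank the candidate synonyms (for example by corpus frequency) before choosing the most or least frequent one.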
Syntax transfers modify the underlying grammatical rules that govern the structure of sen-tences (Chomsky, 2002) without affecting the content (Akmajian and Heny, 1980). We selected three simple syntax transfers: tense changes (3 transfers: to past/present/future tense), voice changes (2 transfers: active to/from passive), proposition position changes (2 transfers: front to/from back). Semantic transfers are changes to the meaning of sentences (Bagha, 2011) that not only extend beyond lexical (Cruse et al., 1986) andsyntaxlevel (Kratzer andHeim, 1998) changes, but also include modifications using indirect information such as referring (Strawson, 1950), situations (Barwise andPerry, 1981) or intentions and extensions (Allwood et al., 1977). As a starting point, we defined two simple types of semantic transfers: (1) Info removal: 3 transfers on different deletions: wordlevel (removing adjectives and adverbs), phrase level (removing propositions), and substatement level (removing entire substatements) that represent referring and situations, as well as (2) Info addition: 1 transformation that adds a given piece of information regarding a particular phrase in the current sentence representing extension. Thematic transfers concern the placing of emphasis across different parts in a sentence (Stevenson et al., 1994) to highlight different aspects of the same event (DiMarco, 1994). We defined two emphatic transfers across adjectives and verbs (actions). As an example of adjective emphasis, "the hot meat is on the table" emphasizes location, while "the meat on the table is hot" emphasizes the hot temperature. To enforce consistency across annotators, we require adjective emphasis to rewrite the sentence into a be-statement of the emphasized adjective (as in the example above). Analysis: To evaluate how useful these 21 selected atomic transfers are, we randomly sampled 50 sentence pairs from GYAFC and 50 sentences from Yelp with their reference transfer generated by Deep Latent Sequence Model (He et al., 2020) and manually tried to complete the transfers by composing one or more of the 21 atomic transfers we have defined, together with capitalization fixes and word-spelling fixes. We found that 72% of transfers from GYAFC, and 82% of transfers from Yelp can be done this way. Specifically, in GYAFC, 24% require one atomic transfer, and another 48% require composing multiple atomic transfers; in Yelp, 52% require one or less atomic transfers and another 30% require composing multiple atomic transfers. The results of this analysis suggest that STYLEPTB's dictionary of atomic styles is already a good start in studying compositional style transfer. STYLEPTBatomic transfers and their composition do indeed span a large percentage of current highlevel style transfers. The STYLEPTB Dataset Using these selected 21 style constructs, we now illustrate the steps towards collecting and annotating parallel sentences across style transfers. Dataset Preprocessing We use Penn Treebank (PTB) (Marcus et al., 1993) as our source of sentences. Additionally, the availability of parse trees in PTB allows us to automate the majority of syntactic transfers using rule-based methods. We begin with a total of 43, 948 sentences in the full PTB before removing sentences that are incomplete, too long (over 12 words), or too short (less than 5 words). This leaves 7, 719 sentences (see Figure 2 for statistics and Appendix A.1 for full details). 
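A small sketch of the length-based filtering step is shown below, assuming the PTB sentences are already available as lists of tokens. The "incomplete sentence" check used by the authors is not specified in the text, so only the 5-12 token length criterion is illustrated.

```python
# Sketch of the sentence filtering step: keep PTB sentences with 5-12 tokens.
# The additional completeness check mentioned in the paper is not reproduced here.
from typing import List

MIN_LEN, MAX_LEN = 5, 12

def filter_sentences(sentences: List[List[str]]) -> List[List[str]]:
    """Keep only sentences whose token count falls in the allowed range."""
    return [s for s in sentences if MIN_LEN <= len(s) <= MAX_LEN]

if __name__ == "__main__":
    toy = [
        "the shift wo n't affect operations".split(),                      # 6 tokens, kept
        "ok".split(),                                                      # too short, dropped
        ("most other states have enacted similar bans on similar "
         "grounds in recent years today").split(),                         # too long, dropped
    ]
    print(len(filter_sentences(toy)))  # prints 1
```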
Generating transferred sentences We give a brief overview of the data annotation process (see Appendix A.3 for full details). Automated rule-based transfers: For 18 of the 21 transfers (lexical, syntax, and semantic transfers except Info Addition), we defined rule-based transfers using NLTK (Loper and Bird, 2002), parse trees (syntax, semantics), and WordNet (lexical). After human quality control, the total number of sentences transferred is listed in Table 2 (see Appendix A.2 for more details on automated generation and Appendix A.4 for human evaluation on quality of generated sentences) Transfers with human annotations: For the remaining 3 transfers, we have human annotators (via Amazon Mechanical Turk) manually rewrite them due to the difficulty of automating the process. See Appendix A.3 for details on the data generation, human annotation and quality assurance process for each of the three transfers. After annotations and quality control, we obtained 696 rewritten sentences for adjective emphasis, 1201 rewritten sentences for verb emphasis, and 2114 valid sentence-information pairs with their transferred sentence with information added. Relative Difficulty of Transfers Lexical transfers can be done by replacing individual words and is simple to evaluate. To evaluate the difficultly of the remaining 13 syntax, semantic, and thematic transfers, we calculated the tokenlevel (i.e. word level) Hamming distance between original and transferred sentences. Using this metric, we categorized these 13 transfers into easy, medium and hard categories (see Table 3). We also evaluated semantic measures from BERT embeddings (Devlin et al., 2018) but found it less correlated with human judgment (see Appendix A.5). Figure 3: Example of generating sentence pairs that compose tense and voice changes. Starting from an original sentence ( green box ), we sequentially apply parse tree transfers (blue arrows) to obtain multiple transferred sentences ( yellow box ), yielding multiple parallel pairs (yellow arrows). We use transfer tokens (โˆ† 1 , โˆ† 2 ) to track changes (see Section 5 for details). No Voice Change (0) Active To Passive (1) Passive To Active (2) Compositional Transfers To allow for compositionality, we also generated compositional data that includes parallel pairs of sentences linked by multiple sequential transfers. To compose automatic transfers, we applied a sequence of rule-based transfers starting with parse trees (see Table 4). To compose transfers that involve human annotations, we apply a sequence of "reverse" changes on the original sentences with parse trees (since human rewritten sentences no longer have parse trees), before chaining the sequence of automatic reverse transfers with the final human-annotated transfer (see Figure 3). A Model for Compositional Transfer We extend the pre-trained GPT2 language model (Radford et al., 2019) for parallel style transfer by giving it designated style transfer tokens as input in addition to the source sentence. For example, for each individual binary style s i , we define a style transfer token โˆ† i โˆˆ {0, 1, 2} where โˆ† i = 0 represents keeping s i unchanged, โˆ† i = 1 represents a change from s i = 0 to s i = 1, and vice versa for โˆ† i = 2. We likewise extend the definition of โˆ† i for styles taking more than 2 values. Given a parallel (source, target) pair (s, t), we define the appropriate transfer token โˆ† โˆˆ {0, 1, 2} and train using maximum likelihood estimation to predict every word t j , for j = 1, 2, . . . 
, T, in the target sentence given the source and ∆:

\theta^* = \arg\max_{\theta} \; \mathbb{E}_{(s,t)\sim\mathcal{D}} \Big[ \sum_{j=1}^{T} \log p_{\theta}(t_j ; s, \Delta) \Big] \quad (1)

where θ denotes the pre-trained GPT2 parameters and θ* denotes the parameters after fine-tuning on STYLEPTB. Note that we also train the model to reconstruct the same source sentence again when setting ∆ = 0 (no style change), which we found to help bridge the domain shift between data used to pre-train GPT2 and sentences in STYLEPTB. As a step towards compositionality, we also train with (source, target) pairs that undergo multiple atomic style transfers as provided in STYLEPTB, resulting in multiple style transfer tokens ∆_i being activated at the same time. We call the resulting model CS-GPT (Compositional Style GPT) and show its architecture in Figure 4. Learning separate representations for each ∆_i results in disentangled style variables that can then be composed as desired. Another benefit of using disentangled style variables is the ability of a single model to perform multiple style transfers.
Datasets and Metrics: We use STYLEPTB and evaluate on the 13 non-lexical transfers (since lexical changes work best with fixed word substitutions). Please refer to Appendix B.1 for dataset preprocessing details. Automated evaluation metrics consist of automatic BLEU scores, METEOR scores, ROUGE_L scores, and CIDEr scores between generated and ground-truth sentences (Sharma et al., 2017). In addition, we did human evaluations on random sets of 10 samples generated by each model for each transfer. We followed prior work (He et al., 2020) and had 2 independent annotators each rate transferred sentences on three aspects (clarity/grammar, content preservation, style change) on a 1-5 Likert scale, and take the average.
Baseline Models: We evaluate the following baselines commonly used in style transfer. Since none of these existing models handle compositions of styles, we train separate models on each of the 13 transfers.
1) GPT2: We fine-tune pre-trained GPT2 (Radford et al., 2019) on each transfer with the source as input and predicting the target using MLE, similar to Liu et al. (2020); Syed et al. (2020).
2) SEQ2SEQ: A Seq2Seq model (Sutskever et al., 2014) with attention trained using MLE (Zhou et al., 2020; Jin et al., 2020).
3) RETRIEVEEDIT: Given input x, a retriever is trained to pick a similar training example (x', y'). We treat y' as our prototype and use a trained editor to edit it into the desired output y (Guu et al., 2018; Madaan et al., 2020).
4) HUMAN: We also report human performance for each style transfer by having two independent human annotators manually perform the style transfer on 20 sampled sentences.
Results and Observations: We evaluate these 3 baseline models on the style transfers in STYLEPTB and show results in Table 5. We make the following observations:
Baseline comparisons: RETRIEVEEDIT performed equally well compared to GPT2 in some transfers such as To Future Tense and performs significantly better compared to GPT2 in most transfers. When qualitatively observing the generated sentences, we found that while GPT2 can learn syntactic and semantic transfers, it suffers in reconstructing the rest of the sentence (e.g. making word repetitions). This was not an issue for RETRIEVEEDIT since it works by editing the sentence from the prototype. Both GPT2 and RETRIEVEEDIT significantly outperform SEQ2SEQ models on all 13 non-lexical transfers.
Table 6: Human evaluation of style transfer models trained on the Verb Emphasis task. All approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and style metrics. GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation.
Difficulties of transfers: We also compare the relative difficulty of transfers based on the automatic metrics described in Section 4.3. In line with our Hamming distance metric, we found that thematic transfers are especially difficult: all three baselines struggled on this task, which is intuitive because shifting emphasis requires completely different sentence structure changes depending on the sentence and the emphasized words. We found that GPT2 and SEQ2SEQ tend to struggle with grammar and word repetitions, while RETRIEVEEDIT sometimes follows the structural edits in the chosen (and often completely unfitting) examples, resulting in malformed outputs (see examples in Appendix C.1). All current methods significantly fall short of human performance, especially on hard transfers. Therefore, we believe that STYLEPTB brings novel challenges that will spark future research in modeling fine-grained style changes.
Human evaluation: We sampled 10 transferred sentences from each automatic generation model for each transfer and asked 2 independent annotators to rate them. We show average results below for one of the hard transfers (Verb Emphasis). From Table 6, we found that all approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and style metrics. Furthermore, GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation, which further supports our qualitative observations above. Full results for human evaluations are available in Table 17 in Appendix C.1.
Towards Compositionality of Styles: As a step towards learning compositional transfers, we implemented the following baselines:
1. GPT2: Sequentially applying the GPT2 model trained for single transfers multiple times to perform compositional transfers.
2. CS-GPT: Our proposed CS-GPT model (detailed in Section 5) trained on compositional transfer pairs found in STYLEPTB.
3. CS-GPT-ZERO: An ablation of CS-GPT trained only on individual style changes but tested in a zero-shot setting on compositional transfers.
We evaluated these models on two compositional transfers: Tense+Voice (composing tense changes and active/passive voice changes), and Tense+PP Removal (composing tense changes and PP Removal). We conveniently used the numerical prefixes in the datasets as transfer tokens. The results are shown in Table 7 and we make the following observations:
CS-GPT works best for compositional transfers: CS-GPT significantly outperforms existing methods for compositional style transfer. This is expected, as CS-GPT is trained on the full compositional dataset, while CS-GPT-ZERO is only trained on part of the compositional data and SEQGPT is trained on single-transfer parallel data. Qualitatively, we observed that CS-GPT is able to perform each required transfer at the same time, producing outputs with relatively low reconstruction error compared to the other two methods. We included a few samples generated by the three models in Table 9, with more examples in Appendix C.2.
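To make the training setup of Eq. (1) concrete, the sketch below prepends style-transfer tokens to a GPT-2 input and fine-tunes with a maximum-likelihood loss on the target only. It uses HuggingFace Transformers rather than the authors' codebase, and the special-token strings, separator, and loss masking are our own assumptions, not the released implementation.

```python
# Sketch of CS-GPT-style fine-tuning: prepend transfer tokens (one value per
# style, e.g. tense in {0..3}, voice in {0..2}) to the source sentence, then
# train GPT-2 with MLE on the target (Eq. 1). Token strings and masking scheme
# are illustrative assumptions, not the authors' code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

transfer_tokens = [f"<tense={i}>" for i in range(4)] + [f"<voice={i}>" for i in range(3)]
tokenizer.add_special_tokens({"additional_special_tokens": transfer_tokens + ["<sep>"]})
model.resize_token_embeddings(len(tokenizer))

def make_example(source: str, target: str, deltas):
    """Build (input_ids, labels) so that the loss is computed only on the target."""
    prefix = " ".join(deltas) + " " + source + " <sep> "
    prefix_ids = tokenizer(prefix)["input_ids"]
    target_ids = tokenizer(target + tokenizer.eos_token)["input_ids"]
    input_ids = torch.tensor([prefix_ids + target_ids])
    labels = torch.tensor([[-100] * len(prefix_ids) + target_ids])  # -100 = ignored positions
    return input_ids, labels

input_ids, labels = make_example(
    source="it is also planning another night of original series",
    target="it was also planning another night of original series",
    deltas=["<tense=2>", "<voice=0>"],  # to past tense, no voice change
)
loss = model(input_ids=input_ids, labels=labels).loss  # one-pair instance of Eq. (1)
loss.backward()
```

At inference time, the same transfer-token prefix (with the desired combination of values) would be fed before the source sentence and the continuation after the separator decoded as the transferred sentence.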
Zero-shot compositionality remains challenging: We included CS-GPT-ZERO to explore whether CS-GPT can learn to compose transfers in a zero-shot manner. While CS-GPT outperforms CS-GPT-ZERO and existing models, all still struggle to perform zero-shot compositions. We noticed that CS-GPT-ZERO usually only performs one of the necessary transfers: e.g. in a Tense+Voice task, CS-GPT-ZERO tends to only make the tense change, not the voice change. Quantitatively, in the Tense+PP Removal dataset, CS-GPT-ZERO performs much worse than either CS-GPT or sequentially applying GPT2; in Tense+Voice dataset, CS-GPT-ZERO is similar to GPT2. We believe that sequentially applying GPT2 accumulates errors present in each one. Training on compositional styles may improve fine-grained styles: We observe that CS-GPT trained on compositional data can achieve similar performance as compared to GPT2 trained specifically for that one single transfer. Sometimes CS-GPT even outperforms GPT2, as shown in Table 8 (see Table 20 in Appendix for full results). Therefore, CS-GPT has leveraged compositional structure and data to perform strongly on multiple single and compositional transfers with just one model. Broader Impact: Mitigating Biases Unconditional language models have been shown to perpetuate undesirable stereotypes during generation which disproportionately harm underrepresented social groups (Liang et al., 2020b;Ravfogel et al., 2020;Sheng et al., 2020). As one possible application of fine-grained style transfer (in addition to many others), we hypothesize that more fine-grained control over the generated outputs can Transfer To Future + Passive To Active To Past + PP Removal Source Sentence NUM % was risen by sales to NUM billion from NUM billion. the bond market was unmoved by the economic statistics. CS-GPT sales will rise NUM % to NUM billion from NUM billion. the bond market is unmoved. Another technique that may be useful is to replace the key nouns with vague pronouns: GPT2: The Hispanic man said that he was walking his dog when the suspect yelled. Replace nouns: The hispanic man said that he was walking his dog when they yelled. 2. Semantic changes can also help by removing biased information from generated test using information removal. In the following two examples, the information removal is done by GPT2: GPT2: The woman walks home from the theatre crying . Info removal: The woman walks home from the theatre. GPT2: The Asian man drove a black BMW to the station to meet with an officer who would be taking him to a jail . Info removal: The Asian man drove a black BMW to meet with an officer. We can also replace the removed part with uniformly sampled information with Info Addition. Since existing baselines currently struggle, we present a manual example as proof of concept: GPT2: The woman worked as a medical assistant in a health center. Info removal: The woman worked in a health center. Info addition with "worked as doctor": The woman worked as a doctor in a health center. Finally, we performed a qualitative evaluation: we sampled 49 sentence completions using prompts from Sheng et al. (2019) that focus on stereotypical associations between gender and occupations (e.g. The woman worked as a babysitter.). We obtained post-processed versions using Info removal followed by Info addition with uniformly sampled new occupations. 
When presented to two independent human annotators, they judged 22/49 sentences as showing significantly lower bias, with the remaining showing little or no bias change, indicating that fine-grained style transfer presents a new perspective on mitigating social biases in language models (see Appendix D for evaluation details). Conclusion In this paper, we propose a large-scale benchmark, STYLEPTB, for fine-grained style transfer spanning atomic lexical, syntactic, semantic, and thematic changes as well as their compositions into high-level transfers. We show that STYLEPTB provides an important step towards training more controllable text generators and removing social biases from generated text. However, existing style transfer models struggle to perform fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will inspire future research in controllable text generation, compositional models, and style disentanglement. Appendix A Dataset Construction Here we provide more details on dataset pre-processing, annotation, quality control, post-processing, and statistics. A.1 Dataset Preprocessing We use parts of Penn Treebank (PTB) that have been used in training neural language models (Kim et al., 2015) as the source of sentences to transfer. The availability of parse trees for these sentences allows us to automate the majority of transfers using rule-based Python scripts. We begin with a total of 43,948 sentences in the full PTB before removing sentences that are incomplete, too long (over 12 words), or too short (less than 5 words). This leaves 7,719 sentences (see Figure 2 for statistics). Note that the original sentences in this version of the treebank have all punctuation removed, and have the "n't" shorthand as separate words (for example, "wasn't" is represented as two words "was n't"). The transferred sentences we generated or collected in this new dataset follow the same format. A.2 Programmatic Transfers For 18 of 21 transfers (including all lexical and syntax transfers, as well as all semantic transfers except Info Addition), we wrote Python scripts that utilize the parse trees of the sentences to complete the transfers. For the lexical transfers, synonyms/antonyms are extracted from WordNet (Fellbaum, 2012). For syntax transfers and information deletion transfers, we used NLTK tree editing tools and lemmatizers to manipulate parse trees to transfer sentences. Not all transfers are applicable to each sentence (for example, synonym replacements cannot be done to a sentence with no synonyms found for any of its words, and proposition front/back changes do not apply to sentences without propositions in the front or back). The total number of sentences transferred by our scripts is listed in Table 2. Although we found that the data collected for two syntax transfers, Passive To Active and Proposition Back To Front, are extremely low in quantity, this shouldn't be a problem in training models for these transfers because the reverse transfers of these two are also part of the dataset with much larger quantities, and we can simply swap the original/transferred sentences of the reverse transfers to get as much data for these two transfers as for other ones. A.3 Annotation Details For the three remaining transfers, we asked human annotators to manually rewrite the sentences due to the difficulty of automating the process.
Due to limited resources, we randomly selected 2, 000 of the 7, 719 selected sentences as original sentences for these three transfers. We utilized Amazon Mechanical Turk (AMT) to get annotators. For each task, we designed a prompt with very detailed instructions and plenty of examples to ensure consistency of rewritten sentences. In addition, we tested them by releasing small batches of tasks and see if the annotations are satisfactory. When the main batch of tasks is released, we also inspect random samples of rewritten sentences of each worker to ensure quality and we reject ones from the workers who do not follow our consistency requirements. We also told workers to make sure the sentences they produce are grammatically correct and free of spelling mistakes and rejected sampled rewritten sentences that have grammatical or spelling errors. For Info Addition transfers, we used Visual Genome Dataset (Krishna et al., 2016) as the knowledge base for additional information. We first made a dictionary mapping each word to attributes and relations in Visual Genome that contains the word, ordered by frequency of appearance in Visual Genome, and then for each noun in the sentence, we select the most frequent attribute and relation from Visual Genome that contain the noun (if any) as additional information to be added to the sentence. Therefore, multiple sentence-information pairs may be created from the same original sentence. We ended up with 4, 412 total pairs to be annotated. Since the information added may be unfitting or even contradictory in the context of the sentence (such as information "milk in stock" in a sentence about stock markets), we asked workers to evaluate whether their rewritten sentences satisfies common sense, and we discard rewritten sentences that are marked as not fitting common sense. We ended up with 2, 117 rewritten sentences that are marked as satisfying common sense. The web page used for Information Addition task is shown in Figure 5, and the instructions for this task (which pops up when "view instructions" on the prompt page is clicked) is shown in Figure 6, together with lots of detailed examples in the example tab next to it. For adjective emphasis and verb emphasis tasks, we use information from the parse trees to identify adjectives and verbs to be emphasized, and we filter out words that shouldn't be emphasized (such as "'be" for verb emphasis). To ensure consistency, the workers are instructed to strictly follow the required format for each emphasis task. If an emphasis rewrite with the required format is impossible or if the original sentence is already emphasizing the word in the required format, the workers are asked to submit "N/A", and we discard these cases from our dataset. We started with 808 adjective emphasis tasks and 1, 373 verb emphasis tasks, and after discarding "N/A" results we still have 696 rewritten sentences for adjective emphasis task and 1201 rewritten sentences for verb emphasis task. The web pages for the two emphasis tasks are shown in Figure 7 and Figure 9, respectively. And the instructions for each emphasis task are shown in Figure 8 and Figure 10, respectively. Finally, the detailed statistics of the data collection process of these three transfers are shown in Table 10. Table 11: Human evaluations of randomly sampled automatically generated sentence transfers. The results show that the programmatically generated transfer data is very reliable. 
A.4 Human Evaluation of Automatically Generated Data We evaluated the automatically generated parts of the dataset by asking three human annotators to rate sampled sentence transfers on three aspects (clarity/grammar, content preservation, style change) on a rate of 1-5. We found that most of the categories had perfect scores and the lowest averaged scores across one category of one task is 4.83. The full results are shown in Table 11. A.5 Transfer Difficulty with Semantics Distance To measure the semantic distance between original and transferred sentences in each transfer, we used BERT pre-trained models (Devlin et al., 2019) to compute the contextual representations of each sentence, and measured the average 2 distance as well as cosine similarity between representations of original and transferred sentences. The results are shown in Table 12. We find that this metric is not as effective as Token Level Hamming Distance in deciding the relative difficulty of transfers, therefore we stick to the difficulty categories determined in Table 3. Table 12: Average 2 distance and cosine similarity between BERT pooled output vectors of original and transferred sentences of the syntax, semantic and thematic transfers. A.6 Compositional Transfers To allow for compositionality, we also generated compositional data that include parallel pairs of sentences linked by multiple sequential transfers. To compose automatic transfers, we applied a sequence of rulebased transfers starting with parse trees. We use prefix labels to indicate the sequence of transfers undertaken. For example, when composing tense changes and active/passive voice changes, we use one label indicating tense change (0 for no change, 1 for to future, 2 for to past, 3 for to present) and the one indicating voice change (0 for no voice change, 1 for Active to Passive, 2 for Passive To Active). Thus, a prefix of "2 1" would mean changing the sentence to both past tense and active voice. The process of generating these data points is illustrated in Figure 3: we first generate active/passive pairs from the parse trees of original sentences, then apply tense changes on each pair to obtain both changes. Final statistics are shown in Table 4. To compose transfers that involve human annotations, we apply "reverse" changes on the original sentences with parse trees (since human rewritten sentences no longer have parse trees). For example, to compose Active To Passive and Info Addition, we apply an automatic Passive To Active change on an original passive sentence A to generate active sentence B, and if C is the human-annotated result of adding some information to A, then B to C is a composition of Active to Passive and Info Addition. B Experimental Details B.1 Dataset Preprocessing For transfers with additional input to the original sentence (additional information in Info Addition, adjective to emphasize in Adjective Emphasis, etc), we put the additional input at the end of the original sentence separated by a semicolon token. When training Passive To Active and PP Back To Front, due to the low amount of data available, we also include data collected by their reverse operations and swap the source and target. For each transfer, we take all available parallel sentences, and divide them into train, valid and test sets in a 90%, 5%, 5% ratio. All numerals in the sentences are replaced with a "NUM" token when training the baselines. B.2 Hyperparameters The hyperparameters used for all models trained in all experiments is shown in Table 13. 
Note that in GPT2 based models, each iteration means passing through all sentences in the training set, while in SEQ2SEQ and RETRIEVEEDIT each iteration means passing through a batch in the training set. Also, the vector sizes of all GPT2 models is equal to the default pre-trained GPT2 (small) model with LM head. The hyperparameters for RETRIEVEEDIT are the same as the default from the code provided by Hashimoto et al. (2018) B.3 Model Parameters Since GPT2 Baselines, CS-GPT and CS-GPT-ZERO all uses pretrained GPT2 (small), each of those models have about 124M parameters. Under the hyperparameter settings described above, GRU+attn has about 2.4M parameters. Retrieve-Edit has 51.8M parameters. B.4 Training Resources and Time All models except RETRIEVEEDIT are run on a single GPU on Google Colab. The running time for training SEQ2SEQ for full 185, 000 iterations is about 2 hours. The training time for GPT2 for full 60 iterations takes between 1 and 4 hours (depending on the size of parallel data in the specific transfer), although the best results (in terms of valid loss) can usually be achieved within the first 20 iterations. The training time for CS-GPT and CS-GPT-ZERO for full 30 iterations is about 4 hours on compositional datasets (Tense+Voice, Tense+PP Removal), and the best results can be achieved within the first 10 iterations. The running time for training each RETRIEVEEDIT model ranges between 40 minutes and 1 hour. C Full Experimental Results C.1 Fine-grained Style Transfer We show complete results of single-style experiments in Table 14-16. We make similar observations that in line with our Hamming distance metric, thematic transfers are especially difficult-all three baselines struggled on this task, which is intuitive because shifting emphasis requires completely different sentence structure changes on different sentences and emphasized words. Shown below are some examples of thematic transfers done by GPT2 and RETRIEVEEDIT model. We found that GPT2 and SEQ2SEQ tend to struggle with grammar and word repetitions, while RETRIEVEEDIT sometimes follows the structural edits in the chosen (and often completely unfitting) examples, resulting in malformed outputs (see examples in Appendix C.1). Furthermore, all current methods significantly fall short of human performance especially on hard transfers. Therefore, STYLEPTB brings novel challenges that will stimulate future research in modeling fine-grained style changes. Note: in the input, along with the original sentence, the word to emphasize is in red ): style metrics. Furthermore, GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation, which further supports our qualitative observations above. C.2 Compositional Style Transfer We present full results on compositional style transfer in Table 19 and show more examples of compositional transfers done by CS-GPT, CS-GPT-ZERO, and SEQGPT in Table 18. CS-GPT significantly outperforms existing methods in all compositional style transfer tasks in both datasets. This is expected, as CS-GPT is trained on the full compositional datasets, while CS-GPT-ZERO is only trained on part of the compositional dataset and each part of SEQGPT is trained on single-transfer parallel data. Qualitatively, we observed that CS-GPT is able to perform each required transfer at the same time, producing outputs with relatively low reconstruction error compared to the other two methods. We also present full comparisons of CS-GPT and GPT2 on single style transfer are in Table 20. 
We observe that CS-GPT can often perform single transfers better than a GPT2 model trained specifically for that one task, while in the remaining cases CS-GPT and GPT2 have nearly the same performance. Therefore, CS-GPT leverages compositional structure and data to perform strongly on multiple single and compositional transfers with just one model.

Table 19: Results on compositions of transfers using sequential application of GPT2 (SEQGPT), CS-GPT-ZERO (adding the compositional model but not compositional data) and CS-GPT (with both the compositional model and data). The results show that CS-GPT significantly outperforms the other two methods, and that zero-shot composition remains challenging, as CS-GPT-ZERO does not perform very well in comparison.

Table 21 (caption): Top table: human annotators found that 21 out of 50 sentences generated by GPT2 from "The man worked as" and 28 out of 50 sentences generated by GPT2 from "The woman worked as" exhibit gender bias. Bottom table: out of the 49 biased sentences, after using style transfer to replace occupations with randomly sampled ones, human annotators found 22 of them to be significantly less biased, while the rest were either slightly less biased or neutral.

Text generated by GPT2 → Text after occupation replacement:
The man worked as a security guard for the family's hotel until the weekend. → The man worked as a receptionist for the family's hotel until the weekend.
The man worked as a driver for his father. → The man worked as a personal care aide for his father.
The woman worked as a maid at a resort in a small town. → The woman worked as a driver at a resort in a small town.
The woman worked as a nurse at a facility. → The woman worked as a construction worker at a facility.
Table 22: Examples of sentences generated by GPT2 and their occupation replacements that were rated "Significantly Less Biased" after the change by human annotators.

D Mitigating Social Biases: Qualitative Evaluation
We created two prompts, "The man worked as" and "The woman worked as", and generated 50 sentences with each prompt from GPT2. Next, we determined biased words by taking the 1,000 closest vectors in GloVe word embeddings (Pennington et al., 2014) to "man" and to "woman". We then marked a sentence as biased if the phrase describing the occupation in the sentence contains any biased words. With this standard, we found that 21 out of 50 sentences for the "man" prompt and 28 out of 50 sentences for the "woman" prompt are biased. We then replaced the occupations in these 49 biased sentences with occupations sampled uniformly at random from all 100 generated sentences, and asked two independent human annotators to evaluate the 49 replaced sentences on a five-point scale of Significantly More Biased, Slightly More Biased, The Same, Slightly Less Biased, and Significantly Less Biased. On average, the annotators reported 22 sentences being significantly less biased compared to before the replacements, while all other sentences were either slightly less biased or neutral. The full results of this experiment are shown in Table 21. A few examples that were deemed Significantly Less Biased by both annotators are shown in Table 22.

Figure 2: Statistics of STYLEPTB: (a) the distribution of sentence lengths, (b) count of word tokens by part-of-speech, and (c) the top 30 most frequent tokens, excluding stop-words. STYLEPTB exhibits diversity in sentence form and style transfer annotations.
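The bias-flagging heuristic in Section D is straightforward to implement. Below is a minimal sketch (not the authors' code), assuming pretrained GloVe vectors loaded through gensim; `occupation_phrase` is a hypothetical helper standing in for however the occupation span is extracted from a generated sentence.

```python
# Illustrative sketch of the Section D heuristic: a generated sentence is
# marked biased if its occupation phrase contains any of the 1,000 GloVe
# nearest neighbors of "man" or "woman"; biased occupations are then
# replaced with occupations sampled at random from all generations.
import random
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # any pretrained GloVe model works here
biased_words = {w for w, _ in glove.most_similar("man", topn=1000)}
biased_words |= {w for w, _ in glove.most_similar("woman", topn=1000)}

def occupation_phrase(sentence: str) -> str:
    # Hypothetical helper: for these prompts, the occupation phrase is
    # simply the text after "worked as".
    return sentence.split("worked as", 1)[-1]

def is_biased(sentence: str) -> bool:
    tokens = occupation_phrase(sentence).lower().split()
    return any(tok.strip(".,") in biased_words for tok in tokens)

def replace_occupation(sentence: str, all_occupations: list) -> str:
    # Swap in an occupation sampled uniformly at random from the pool of
    # occupations observed across all 100 generated sentences.
    head, _ = sentence.split("worked as", 1)
    return head + "worked as " + random.choice(all_occupations)
```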
Figure 4: CS-GPT uses multiple transfer tokens ∆i ∈ {0, 1, 2} to enable compositional style transfer across multiple styles in our model STYLEPTB.
Figure 5: The Amazon Mechanical Turk prompt page for the information addition task.
Figure 6: The Amazon Mechanical Turk instruction page for the information addition task.
Figure 7: The Amazon Mechanical Turk prompt page for the adjective emphasis task.
Figure 8: The Amazon Mechanical Turk instruction page for the adjective emphasis task.
Figure 9: The Amazon Mechanical Turk prompt page for the verb/action emphasis task.
Figure 10: The Amazon Mechanical Turk instruction page for the verb/action emphasis task.

Table 2 (excerpt): Aspect LEXICAL; Transfer: Noun synonym replacement; Original sentence: "The shift wo n't affect operations."; Transferred sentence: "The displacement wo n't affect operations."
Table 2: STYLEPTB is a large-scale resource spanning 59,767 sentence pairs across 21 individual styles.

Difficulty / Transfer / Hamming (lower is easier):
Easy: ADJ or ADV removal 1.531, To Present tense 2.318, To Past tense 2.447, To Future tense 3.341
Medium: Information addition 3.729, PP removal 4.079, PP back to front 5.429, Substatement removal 5.625, PP front to back 6.235
Hard: Active to passive 8.147, Passive to active 8.817, Adjective emphasis 8.846, Verb/Action emphasis 11.614
Table 3: Average token-level Hamming distance between original and transferred sentences for all syntax, semantic and thematic transfers.

Table 4: Number of sentence pairs for each composition of tense change and voice change in the generated compositional dataset.
Table 5: Evaluation results on easy (top), medium (middle), and hard (bottom) transfers. Info Addition and thematic transfers are especially difficult for current models.
Table 7: Results on compositions of transfers: CS-GPT with compositional data works better than CS-GPT-ZERO (without compositional data) and sequentially applying GPT2 models.

Table 8 data (metrics: BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER):
To Present Tense: GPT2 0.753 0.662 0.586 0.523 0.412 0.772 5.293 | CS-GPT (TV) 0.733 0.635 0.553 0.488 0.387 0.744 4.742 | CS-GPT (TP) 0.826 0.755 0.691 0.637 0.491 0.831 6.315
Passive To Active: GPT2 0.433 0.271 0.167 0.120 0.191 0.434 1.329 | CS-GPT (TV) 0.506 0.345 0.243 0.184 0.229 0.505 1.958
Table 8: Comparing CS-GPT trained on compositional data (TV: Tense+Voice, TP: Tense+PP removal) with GPT2 models. Training on compositional transfers sometimes improves fine-grained transfer performance.

Table 9: Two examples of successful compositional transfers generated by CS-GPT.

... help to control the output sentence and mitigate bias. To validate our hypothesis, we perform a proof-of-concept experiment: we show clearly biased sentences GPT2 generated via given prompts from Sheng et al. (2019) (shown underlined), before rewriting them using fine-grained transfers learned by our model.
1. Simple lexical transfers can mitigate bias by replacing certain stereotyped nouns with alternatives (through synonym/antonym replacement):
GPT2: The Black man drove a car to a house where the victim had a family member.
Antonym replacement: The Black man drove a car to a house where the beneficiary had a family member.

Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, and Satwik Kottur. 2020a. On emergent communication in competitive multi-agent teams. In AAMAS.
Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028.
Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. CoRR, abs/1706.09799.
Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. arXiv preprint arXiv:2010.05700.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual Genome: Connecting language and vision using crowdsourced dense image annotations.
Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations.
Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874. Association for Computational Linguistics.
Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020b. Towards debiasing sentence representations. In ACL.
Yixin Liu, Graham Neubig, and John Wieting. 2020. On learning text style transfer with direct rewards. arXiv preprint arXiv:2010.12771.
Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. arXiv preprint arXiv:2004.14257.
Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank.
David McDonald and James Pustejovsky. 1985. A computational theory of prose style for natural language generation. EACL, pages 187-193.
Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. arXiv preprint arXiv:1904.02295.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Johannes Pittermann, Angela Pittermann, and Wolfgang Minker. 2010. Emotion recognition and adaptation in spoken dialogue systems. International Journal of Speech Technology.
Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. arXiv preprint arXiv:1803.06535.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6833-6844.
Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3398-3403.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2020. Towards controllable biases in language generation. arXiv preprint arXiv:2005.00268.
Rosemary J Stevenson, Rosalind A Crawley, and David Kleinman. 1994. Thematic roles, focus and the representation of events. Language and Cognitive Processes, 9(4):519-548.
Peter F Strawson. 1950. On referring. Mind, 59(235):320-344.
Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. Transforming delete, retrieve, generate approach for controlled text style transfer. arXiv preprint arXiv:1908.09368.

Table 10 data (human-annotated tasks; columns: total tasks | tasks rejected and republished | tasks with "N/A" or not-make-sense | total sentences added to dataset | price per task (USD) | number of unique workers):
Semantics, Information Addition: 4412 | 17 | 2296 | 2114 | 0.07 | 19
Thematics, ADJ emphasis: 808 | 14 | 112 | 696 | 0.13 | 9
Thematics, Verb emphasis: 1373 | 141 | 172 | 1201 | 0.12 | 13
Table 10: Statistics on the collection of data in the three transfers using human annotation on AMT.

The hyperparameters for the other models are selected by manual tuning using the lowest validation loss.

Table 13 data (hyperparameters per model):
GPT: pretrained model GPT2 (small) with LM head; pretrained encoder/decoder GPT2 (small); batch size 20; optimizer RMSprop; initial learning rate 2e-5; #turns to half learning rate 15; evaluate every #iterations 1; weight decay 0.015; teacher force ratio 1.0; max iterations 60.
CS-GPT and CS-GPT-ZERO: pretrained model GPT2 (small) with LM head; pretrained encoder/decoder GPT2 (small); batch size 20; optimizer RMSprop; initial learning rate 2e-5; #turns to half learning rate 5; evaluate every #iterations 1; weight decay 0.015; teacher force ratio 1.0; max iterations 30.
SEQ2SEQ: encoder GRU, hidden size 256; decoder GRU, hidden size 256; attention size 256; word embedding size 256; batch size 1; optimizer SGD; initial learning rate 1e-2; #turns to half learning rate 5000; evaluate every #iterations 1000; weight decay 0.015; teacher force ratio 0.9; max iterations 185000.
RETRIEVEEDIT: encoder layers 2; decoder layers 4; hidden size 256; agenda size 256; attention size 256; word embedding size 300; batch size 16; VAE-kappa 500; ident_pr 0.1; optimizer Adam; learning rate 1e-3; max iterations 1000; evaluate every #iterations 100.
Table 13: Hyperparameters for all models in all experiments. Note that in GPT-based models, each iteration means passing through all sentences in the training set, while in GRU+attn and Retrieve-Edit each iteration means passing through a batch in the training set.
Also, the vector sizes of all GPT models match those of the default pretrained GPT2-small model with LM head.

Table 14 data (easy transfers; metrics: BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE_L CiDER):
To Future Tense: GPT2 0.895 0.852 0.813 0.778 0.540 0.899 7.709 | SEQ2SEQ 0.527 0.368 0.261 0.188 0.173 0.531 1.525 | RETRIEVEEDIT 0.899 0.854 0.815 0.778 0.531 0.901 7.731 | HUMAN 0.954 0.915 0.884 0.855 0.636 0.964 9.174
To Past Tense: GPT2 0.836 0.776 0.722 0.674 0.484 0.842 6.700 | SEQ2SEQ 0.478 0.313 0.204 0.133 0.155 0.490 1.374 | RETRIEVEEDIT 0.935 0.903 0.873 0.847 0.606 0.933 8.358 | HUMAN 0.974 0.957 0.939 0.916 0.709 0.982 9.549
To Present Tense: GPT2 0.754 0.663 0.586 0.524 0.412 0.772 5.293 | SEQ2SEQ 0.516 0.361 0.267 0.210 0.190 0.518 1.819 | RETRIEVEEDIT 0.909 0.870 0.830 0.793 0.599 0.916 7.987 | HUMAN 0.969 0.952 0.936 0.918 0.745 0.979 9.501
ADJ or ADV Removal: GPT2 0.647 0.508 0.394 0.308 0.313 0.652 3.259 | SEQ2SEQ 0.450 0.274 0.172 0.112 0.140 0.469 1.171 | RETRIEVEEDIT 0.897 0.841 0.786 0.731 0.511 0.919 7.461 | HUMAN 0.933 0.894 0.870 0.847 0.591 0.965 8.924
Table 14: Evaluation results on easy transfers.

Table 15 data (medium transfers; same metrics):
PP Front to Back: GPT2 0.398 0.210 0.081 0.001 0.184 0.406 0.886 | SEQ2SEQ 0.393 0.280 0.207 0.161 0.162 0.391 1.492 | RETRIEVEEDIT 0.541 0.423 0.301 0.176 0.247 0.547 2.536 | HUMAN 0.965 0.959 0.952 0.945 0.690 0.970 9.671
PP Back to Front: GPT2 0.407 0.241 0.091 0.001 0.166 0.406 0.931 | SEQ2SEQ 0.298 0.157 0.090 0.060 0.112 0.284 0.606 | RETRIEVEEDIT 0.649 0.584 0.535 0.491 0.333 0.656 4.667 | HUMAN 1.000 1.000 1.000 1.000 1.000 1.000 10.000
PP Removal: GPT2 0.763 0.700 0.645 0.593 0.419 0.787 6.012 | SEQ2SEQ 0.330 0.195 0.121 0.081 0.112 0.363 1.004 | RETRIEVEEDIT 0.798 0.770 0.739 0.712 0.478 0.846 7.111 | HUMAN 0.957 0.944 0.931 0.919 0.681 0.976 9.207
Substatement Removal: GPT2 0.430 0.332 0.247 0.176 0.250 0.588 3.090 | SEQ2SEQ 0.317 0.192 0.110 0.001 0.100 0.368 1.041 | RETRIEVEEDIT 0.706 0.678 0.647 0.607 0.405 0.767 6.183 | HUMAN 0.731 0.720 0.705 0.685 0.607 0.788 7.691
Information Addition: GPT2 0.479 0.305 0.189 0.121 0.207 0.475 1.359 | SEQ2SEQ 0.345 0.180 0.094 0.053 0.098 0.335 0.632 | RETRIEVEEDIT 0.493 0.396 0.328 0.275 0.284 0.603 3.401 | HUMAN 0.846 0.762 0.690 0.624 0.521 0.892 6.863
Table 15: Evaluation results on medium transfers. Information Addition is especially hard for current models.

Table 16 data (hard transfers; same metrics):
Active To Passive: GPT2 0.476 0.329 0.238 0.189 0.216 0.464 1.820 | SEQ2SEQ 0.373 0.220 0.141 0.103 0.131 0.345 0.845 | RETRIEVEEDIT 0.681 0.598 0.503 0.427 0.383 0.663 4.535 | HUMAN 0.931 0.881 0.835 0.795 0.587 0.905 8.603
Passive To Active: GPT2 0.433 0.271 0.167 0.120 0.191 0.434 1.329 | SEQ2SEQ 0.339 0.214 0.160 0.132 0.126 0.331 1.062 | RETRIEVEEDIT 0.714 0.659 0.559 0.474 0.397 0.732 5.024 | HUMAN 0.977 0.962 0.942 0.919 0.685 0.973 9.409
Adjective Emphasis: GPT2 0.263 0.079 0.028 0.000 0.112 0.188 0.386 | SEQ2SEQ 0.187 0.058 0.018 0.000 0.059 0.179 0.141 | RETRIEVEEDIT 0.387 0.276 0.211 0.164 0.193 0.369 1.679 | HUMAN 0.834 0.753 0.679 0.611 0.522 0.811 6.796
Verb/Action Emphasis: GPT2 0.309 0.170 0.095 0.041 0.140 0.292 0.593 | SEQ2SEQ 0.289 0.127 0.066 0.038 0.098 0.275 0.300 | RETRIEVEEDIT 0.416 0.284 0.209 0.148 0.223 0.423 1.778 | HUMAN 0.649 0.569 0.493 0.421 0.433 0.693 5.668
Table 16: Results on hard transfers. Thematic transfers are especially difficult for current models.
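For reference, the BLEU-n columns in Tables 14-16 can be computed with standard tooling. The paper does not specify the exact evaluation scripts, so the following is an illustrative sketch assuming NLTK, not the authors' implementation; `hypotheses` and `references` are hypothetical lists of whitespace-tokenized sentences with one reference per hypothesis.

```python
# Illustrative only: corpus-level BLEU-1 through BLEU-4 for one transfer's outputs.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_1_to_4(hypotheses, references):
    refs = [[r.split()] for r in references]   # each hypothesis has one reference
    hyps = [h.split() for h in hypotheses]
    smooth = SmoothingFunction().method1       # avoids zero scores on short outputs
    weights = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
               (1/3, 1/3, 1/3, 0), (0.25, 0.25, 0.25, 0.25)]
    return [corpus_bleu(refs, hyps, weights=w, smoothing_function=smooth)
            for w in weights]
```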
Table 17: Human evaluation for single atomic style transfer on 7 selected transfers (the 7 transfers with BLEU scores appearing in the main part of the paper). The results show that on harder transfers, all approaches fall short of human performance, and that GPT2 excels at style while RETRIEVEEDIT is better at grammar and content preservation.

Table 18 data (two compositional transfer examples; To Future + Passive To Active | To Past + PP Removal):
Source Sentence: NUM % was risen by sales to NUM billion from NUM billion | the bond market was unmoved by the economic statistics
Target Sentence: sales will rise NUM % to NUM billion from NUM billion | the bond market is unmoved
SEQGPT: willalesalesalesales to billion from from NUM billion | the bond market is is
CS-GPT-ZERO: NUM % % % risen risen sales sales NUM NUM from NUM billion | the bond market is unmoved by the economic statistics
CS-GPT: sales will rise NUM % to NUM billion from NUM billion | the bond market is unmoved
Table 18: Two examples of composition transfers generated by CS-GPT, SEQGPT and CS-GPT-ZERO. CS-GPT successfully models compositional transfers across multiple styles.

Table 20: Comparing single-transfer performance between CS-GPT and the GPT2 baselines (where TV indicates that CS-GPT is trained on the Tense+Voice dataset and TP indicates that CS-GPT is trained on the Tense+PP Removal dataset). The results show that CS-GPT can perform multiple single style transfers with performance similar to a GPT2 model trained specifically for that one transfer, and sometimes even outperforms GPT2.

Table 21 data (top): Male context: Biased 21, Not Biased 29, Total 50. Female context: Biased 28, Not Biased 22, Total 50. Overall: Biased 49, Not Biased 51, Total 100.
Table 21 data (bottom): Significantly more biased 0; Slightly more biased 0; Little or no change in bias 22; Slightly less biased 5; Significantly less biased 22; Total 49.
Table 21: Top table: number of generated sentences judged biased for each prompt. Bottom table: human bias ratings of the 49 biased sentences after occupation replacement.

Experiments: We test the performance of current style transfer models on STYLEPTB. Anonymized data and code are included in the supplementary material, and we present extra details and results in Appendices B and C.

https://worksheets.codalab.org/worksheets/0x1ad3f387005c492ea913cf0f20c9bb89/

Acknowledgements: PPL and LM were supported in part by the National Science Foundation (Awards #1750439, #1722822) and National Institutes of Health. HP and BP are supported by the DARPA D3M Program and The Boeing Company. RS was supported in part by NSF IIS1763562 and ONR Grant N000141812861. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, National Institutes of Health, DARPA, The Boeing Company, or the ONR, and no official endorsement should be inferred. We would also like to acknowledge NVIDIA's GPU support and the anonymous reviewers for their constructive comments.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Anandhavelu Natarajan, and Vasudeva Varma. 2020. Adapting language models for non-parallel author-stylized rewriting. In AAAI, pages 9008-9015.
Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. "Mask and infill": Applying masked language model to sentiment transfer. arXiv preprint arXiv:1908.08039.
Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of COLING 2012, pages 2899-2914.
Chulun Zhou, Liangyu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, and Hua Wu. 2020. Exploring contextual word-level style relevance for unsupervised style transfer. arXiv preprint arXiv:2005.02049.

Adjective Emphasis
Original Sentence: several other banks have similar applications pending; similar
Human Annotation: several other banks have applications pending which are similar
GPT2: other applications applications applications applications applications applications pending
SEQ2SEQ: the bank that the the the the the that was
RETRIEVEEDIT: several applications pending is similar application pending that is

Verb Emphasis
Original Sentence: i much prefer money i can put my hands on ; put
Human Annotation: putting my hands on money is something i much prefer
GPT2: putting my my my on on on i do do
SEQ2SEQ: the saying that is what we is not to do
RETRIEVEEDIT: the handing of my hands was by something that my hands on it

RETRIEVEEDIT performed equally well compared to GPT2 in some transfers such as To Future Tense, and performs significantly better than GPT2 in most transfers. When qualitatively observing generated sentences, we found that while GPT2 can learn syntactic and semantic transfers, it suffers in reconstructing the rest of the sentence (e.g., making word repetitions). This was not an issue for RETRIEVEEDIT, since it works by editing the sentence from the prototype rather than generating the output sentence sequentially. Both GPT2 and RETRIEVEEDIT significantly outperform SEQ2SEQ models trained from scratch on all 13 non-lexical transfers.

Human evaluation: We sampled 10 transferred sentences from each automatic generation model for each transfer and asked 2 independent annotators to rate them. We show average results below for one of the hard transfers (Verb Emphasis). From Table 17, we found that all approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and
[]
["Combiner: Full Attention Transformer with Sparse Computation Cost","Combiner: Full Attention Trans(...TRUNCATED)
["Hongyu Ren hyren@cs.stanford.edu \nStanford University\nGoogle Research\nUniversity of Alberta\n\n(...TRUNCATED)
["Stanford University\nGoogle Research\nUniversity of Alberta\n","Stanford University\nGoogle Resear(...TRUNCATED)
[]
"Transformers provide a class of expressive architectures that are extremely effective for sequence (...TRUNCATED)
null
[ "https://arxiv.org/pdf/2107.05768v2.pdf" ]
235,829,099
2107.05768
5d032bd2632b6f5847767f39ce247098c6bbc563
"\nCombiner: Full Attention Transformer with Sparse Computation Cost\n\n\nHongyu Ren hyren@cs.stanfo(...TRUNCATED)
["https://github.com/google-research/googleresearch/tree/master/combiner.","https://github.com/googl(...TRUNCATED)
["FUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS","FUTURE WORD CONTEXTS IN NEURAL NETWORK LA(...TRUNCATED)
["X Chen \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n","(...TRUNCATED)
["Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n","Engineering(...TRUNCATED)
[]
"Recently, bidirectional recurrent network language models (bi-RNNLMs) have been shown to outperform(...TRUNCATED)
10.1109/asru.2017.8268922
[ "https://arxiv.org/pdf/1708.05592v1.pdf" ]
3,632,546
1708.05592
c114a4b047902dbcbaf51a540b9716f0c95fa650
"\nFUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS\n\n\nX Chen \nEngineering Department 1\nUn(...TRUNCATED)
[]
["Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation",(...TRUNCATED)
["Qiu Ran \nPattern Recognition Center\nTencent Inc\nWeChat AIChina\n","Yankai Lin yankailin@tencent(...TRUNCATED)
["Pattern Recognition Center\nTencent Inc\nWeChat AIChina","Pattern Recognition Center\nTencent Inc\(...TRUNCATED)
[ "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics" ]
"Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneou(...TRUNCATED)
10.18653/v1/2020.acl-main.277
[ "https://www.aclweb.org/anthology/2020.acl-main.277.pdf" ]
219,559,126
2006.05165
ecec341773d22fbef77b07260345badf853a667e
"\nLearning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation\(...TRUNCATED)
[]
["Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs","Linguistic Versus La(...TRUNCATED)
["Dongyeop Kang dongyeok@cs.cmu.edu \nCarnegie Mellon University\nPittsburghPAUSA\n","Hiroaki Hayash(...TRUNCATED)
["Carnegie Mellon University\nPittsburghPAUSA","Carnegie Mellon University\nPittsburghPAUSA","Carneg(...TRUNCATED)
["Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th(...TRUNCATED)
"Generating a long, coherent text such as a paragraph requires a high-level control of different lev(...TRUNCATED)
10.18653/v1/d19-1589
[ "https://www.aclweb.org/anthology/D19-1589.pdf" ]
201,698,432
1908.11790
77d07a9ffefb6c227a4bd19e61a7e7c388950f86
"\nLinguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs\nNovember 3-7, 2019\n(...TRUNCATED)
[ "https://github.com/dykang/flownet" ]
[ "How multilingual is Multilingual BERT?", "How multilingual is Multilingual BERT?" ]
["Telmo Pires telmop@google.com \nGoogle Research\n\n","Eva Schlinger eschling@google.com \nGoogle R(...TRUNCATED)
[ "Google Research\n", "Google Research\n", "Google Research\n" ]
[ "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics" ]
"In this paper, we show that Multilingual BERT (M-BERT), released byDevlin et al. (2019)as a single (...TRUNCATED)
10.18653/v1/p19-1493
[ "https://www.aclweb.org/anthology/P19-1493.pdf" ]
174,798,142
1906.01502
008a05fd7fef1d77bca8cbb1350fed1dfdaf34d5
"\nHow multilingual is Multilingual BERT?\nJuly 28 -August 2, 2019\n\nTelmo Pires telmop@google.com (...TRUNCATED)
[ "https://github.com/google-research/bert" ]
["UNIREX: A Unified Learning Framework for Language Model Rationale Extraction","UNIREX: A Unified L(...TRUNCATED)
["Aaron Chan chanaaro@usc.edu \nUniversity of Southern California\n\n","Maziar Sanjabi maziars@fb.co(...TRUNCATED)
["University of Southern California\n","MetaAI","MetaAI","MetaAI","MetaAI","MetaAI","University of S(...TRUNCATED)
[]
"An extractive rationale explains a language model's (LM's) prediction on a given task instance by h(...TRUNCATED)
10.18653/v1/2022.bigscience-1.5
[ "https://www.aclanthology.org/2022.bigscience-1.5.pdf" ]
245,218,726
2112.08802
cff8df2dd6280102908776c929fac7b0642bac8a
"\nUNIREX: A Unified Learning Framework for Language Model Rationale Extraction\nMay 27, 2022\n\nAar(...TRUNCATED)
[]
["Deep Clustering of Text Representations for Supervision-free Probing of Syntax","Deep Clustering o(...TRUNCATED)
["Vikram Gupta vikramgupta@sharechat.co \nShareChat\nIndia\n","Haoyue Shi \nToyota Technological Ins(...TRUNCATED)
["ShareChat\nIndia","Toyota Technological Institute at Chicago\nILUSA","Toyota Technological Institu(...TRUNCATED)
[]
"We explore deep clustering of text representations for unsupervised model interpretation and induct(...TRUNCATED)
10.1609/aaai.v36i10.21317
[ "https://arxiv.org/pdf/2010.12784v2.pdf" ]
244,800,754
2010.12784
c38184c7ed9d798c83dbb48c8231e5a950a9b420
"\nDeep Clustering of Text Representations for Supervision-free Probing of Syntax\n\n\nVikram Gupta (...TRUNCATED)
[]
["VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT","VICARIOUS OFFENSE A(...TRUNCATED)
["Tharindu Cyril Weerasooriya \nRochester Institute of Technology\nRochester Institute of Technology(...TRUNCATED)
["Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mas(...TRUNCATED)
[]
"this paper discusses and contains content that is offensive or disturbing. This paper examines soci(...TRUNCATED)
10.48550/arxiv.2301.12534
[ "https://export.arxiv.org/pdf/2301.12534v1.pdf" ]
256,390,399
2301.12534
43917bd49dfef3ae57d5b08f9086a45ae6e683ea
"\nVICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT\nJanuary 31, 2023\n\(...TRUNCATED)
[ "https://github.com/Homan-Lab/noise-audit-dataset/", "https://github.com/unitaryai/detoxify," ]

Dataset Card for "arxiv_s2orc_cl_with_code"

More Information needed
