{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:03.588190Z" }, "title": "Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs", "authors": [ { "first": "Monisha", "middle": [], "last": "Jegadeesan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sachin", "middle": [], "last": "Kumar", "suffix": "", "affiliation": {}, "email": "sachink@cs.cmu.edu" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "", "affiliation": {}, "email": "jwieting@cs.cmu.edu" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "", "affiliation": {}, "email": "yuliats@cs.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel technique for zero-shot paraphrase generation. The key contribution is an end-to-end multilingual paraphrasing model that is trained using translated parallel corpora to generate paraphrases into \"meaning spaces\"-replacing the final softmax layer with word embeddings. This architectural modification, plus a training procedure that incorporates an autoencoding objective, enables effective parameter sharing across languages for more fluent monolingual rewriting, and facilitates fluency and diversity in generation. Our continuous-output paraphrase generation models outperform zero-shot paraphrasing baselines, when evaluated on two languages using a battery of computational metrics as well as in human assessment. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present a novel technique for zero-shot paraphrase generation. The key contribution is an end-to-end multilingual paraphrasing model that is trained using translated parallel corpora to generate paraphrases into \"meaning spaces\"-replacing the final softmax layer with word embeddings. This architectural modification, plus a training procedure that incorporates an autoencoding objective, enables effective parameter sharing across languages for more fluent monolingual rewriting, and facilitates fluency and diversity in generation. Our continuous-output paraphrase generation models outperform zero-shot paraphrasing baselines, when evaluated on two languages using a battery of computational metrics as well as in human assessment. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Paraphrasing aims to rewrite text while preserving its meaning and achieving a different surface realization. 
It is an eminently practical task, useful in educational applications (Inui et al., 2003; Petersen and Ostendorf, 2007; Xu et al., 2016) , information retrieval (Duboue and Chu-Carroll, 2006; Harabagiu and Hickl, 2006; Fader et al., 2014) , in dialogue systems (Yan et al., 2016) , as well as for data augmentation in a plethora of other tasks (Berant and Liang, 2014; Romano et al., 2006; Fadaee et al., 2017; Jin et al., 2018; Hou et al., 2018) .", "cite_spans": [ { "start": 180, "end": 199, "text": "(Inui et al., 2003;", "ref_id": "BIBREF17" }, { "start": 200, "end": 229, "text": "Petersen and Ostendorf, 2007;", "ref_id": "BIBREF37" }, { "start": 230, "end": 246, "text": "Xu et al., 2016)", "ref_id": "BIBREF50" }, { "start": 271, "end": 301, "text": "(Duboue and Chu-Carroll, 2006;", "ref_id": "BIBREF7" }, { "start": 302, "end": 328, "text": "Harabagiu and Hickl, 2006;", "ref_id": "BIBREF13" }, { "start": 329, "end": 348, "text": "Fader et al., 2014)", "ref_id": "BIBREF9" }, { "start": 371, "end": 389, "text": "(Yan et al., 2016)", "ref_id": "BIBREF51" }, { "start": 454, "end": 478, "text": "(Berant and Liang, 2014;", "ref_id": "BIBREF3" }, { "start": 479, "end": 499, "text": "Romano et al., 2006;", "ref_id": "BIBREF39" }, { "start": 500, "end": 520, "text": "Fadaee et al., 2017;", "ref_id": "BIBREF8" }, { "start": 521, "end": 538, "text": "Jin et al., 2018;", "ref_id": "BIBREF19" }, { "start": 539, "end": 556, "text": "Hou et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generating diverse and coherent paraphrases is a difficult task. Unlike in machine translation, where naturally occurring parallel data in the form of translated news, books and talks is available in abundance on the web, naturally occurring paraphrase corpora are scarce. The most common approaches to paraphrasing are based on translation, in the form of bilingual pivoting (Mallinson et al., 2017a,b) or back-translation (Hu et al., 2019a,b) . This stems from the hypothesis that if two sentences in a language (e.g. English) have the same translation in another (e.g. French), they must be paraphrases of each other. While these pipeline approaches bypass the problem of missing data, they propagate errors. Further, all neural paraphrasing models (e.g., Prakash et al., 2016; Gupta et al., 2018) predict discrete tokens through a final softmax layer. We hypothesize that softmax-based architectures restrict the diversity of outputs, biasing the models to copy words and phrases from the input, which has an effect opposite to the intended one in paraphrasing.", "cite_spans": [ { "start": 373, "end": 400, "text": "(Mallinson et al., 2017a,b)", "ref_id": null }, { "start": 421, "end": 440, "text": "Hu et al., 2019a,b)", "ref_id": null }, { "start": 755, "end": 776, "text": "Prakash et al., 2016;", "ref_id": "BIBREF38" }, { "start": 777, "end": 796, "text": "Gupta et al., 2018;", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we introduce PARAvMF, a simple and effective method of training paraphrasing models by generating into embedding spaces ( \u00a72). Since parallel paraphrasing data is not available even in otherwise high-resource languages like French, we focus on an unsupervised approach. Using bilingual parallel corpora, we adapt multilingual machine translation (Johnson et al., 2017) to monolingual translation. 
We propose to train this model with translation and autoencoding objectives. The latter helps simplify the training setup by using only one language pair, whereas prior work required multiple language pairs and more data to stabilize training (Tiedemann and Scherrer, 2019; Buck et al., 2018; Guo et al., 2019; Thompson and Post, 2020) . To encourage diversity, we propose to replace the final softmax layer in the decoder with a layer that learns to predict word vectors (Kumar and Tsvetkov, 2019) . We show that predicting into word meaning representations increases diversity in paraphrasing by generating semantically similar words and phrases which are often neighbors in the embedding space.", "cite_spans": [ { "start": 360, "end": 382, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF20" }, { "start": 654, "end": 684, "text": "(Tiedemann and Scherrer, 2019;", "ref_id": "BIBREF43" }, { "start": 685, "end": 703, "text": "Buck et al., 2018;", "ref_id": "BIBREF5" }, { "start": 704, "end": 721, "text": "Guo et al., 2019;", "ref_id": "BIBREF11" }, { "start": 722, "end": 746, "text": "Thompson and Post, 2020)", "ref_id": "BIBREF41" }, { "start": 883, "end": 909, "text": "(Kumar and Tsvetkov, 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our proposed model on paraphrasing English and French sentences ( \u00a73). In several setups, standard automatic metrics and human judgment experiments show that our zero-shot paraphrasing model with embedding outputs generates more diverse and fluent paraphrases, compared to state-of-the-art methods ( \u00a74).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let L 1 be the language to paraphrase in. Our goal is to learn a mapping f (x; \u03b8) parameterized by \u03b8. f takes a text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PARAvMF Model", "sec_num": "2" }, { "text": "x = (x 1 , x 2 , \u2022 \u2022 \u2022 , x m )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PARAvMF Model", "sec_num": "2" }, { "text": "containing m words as input, which can be a sentence or a segment in L 1 . It then generates y = (y 1 , y 2 , . . . , y n ) of length n in the same language such that x and y are paraphrases. That is, y represents the same meaning as x using different phrasing. We assume that no direct supervision data is available, but there exists a bilingual parallel corpus between L 1 and another language L 2 . We are also given pre-trained embeddings (Bojanowski et al., 2017) for words in both L 1 and L 2 . The dimension of both embedding spaces is d.", "cite_spans": [ { "start": 443, "end": 468, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The PARAvMF Model", "sec_num": "2" }, { "text": "We use a standard transformer-based encoder-decoder model (Vaswani et al., 2017) as the underlying architecture for f . As visualized in the system diagram presented in the Appendix, f is jointly trained to perform three tasks with a shared encoder and decoder: (1) translation from L 1 to L 2 , (2) translation from L 2 to L 1 , and (3) reconstructing the input text in L 1 (autoencoding). 2 Towards our primary goal of meaning preservation, the translation objectives help the encoder map the inputs in both languages to a common semantic space, whereas the decoder learns to generate language-specific outputs. 
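As a rough illustration of this multi-task setup, the sketch below (in Python) shows one way the three kinds of training pairs could be assembled, prefixing both the encoder and decoder inputs with a token naming the target language and subsampling a small fraction of monolingual copies for the autoencoding objective. The tag names ("<2en>", "<2fr>"), the helper name, and the 1% default are illustrative assumptions based on the description in this section, not the released implementation.

```python
import random

def build_training_pairs(en_fr_pairs, autoencode_frac=0.01, seed=0):
    """Assemble multi-task examples for a shared encoder-decoder.

    Every source and target sequence is prefixed with a start token that names
    the language the decoder should produce, so one model learns En->Fr,
    Fr->En, and En->En (autoencoding) jointly.
    """
    rng = random.Random(seed)
    examples = []
    for en, fr in en_fr_pairs:
        examples.append(("<2fr> " + en, "<2fr> " + fr))  # translate En -> Fr
        examples.append(("<2en> " + fr, "<2en> " + en))  # translate Fr -> En
    # a small random sample of monolingual copies for the autoencoding objective
    n_copy = max(1, int(autoencode_frac * len(en_fr_pairs)))
    for en, _ in rng.sample(en_fr_pairs, n_copy):
        examples.append(("<2en> " + en, "<2en> " + en))
    rng.shuffle(examples)
    return examples
```

Under this scheme the start token, rather than the input language, determines which language the decoder is asked to generate, which is what lets the same model be reused for monolingual rewriting at test time.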
On the other hand, with the autoencoding objective, we expose the model to examples where the input and output are in the same language, biasing the model to adhere to the start token supplied to it and decode monolingually. Using this training algorithm, we find in our experiments ( \u00a74) that the resulting paraphrases, albeit meaning-preserving, still lack diversity. We identify two reasons for this issue. First, the model overfits to the autoencoding objective and just learns to copy the input sentences. We address this issue by using only a small random sample of the total training sentences for training with this objective. 3 Second, we find that the cross-entropy loss used to train the model results in peaky distributions at each decoding step, where the target words get most of the probability mass. This peakiness is another sign of overfitting and also reduces diversity (Meister et al., 2020) . In our preliminary experiments, we find that prior work which addresses this issue by augmenting the training loss with diversity-inducing objectives (Vijayakumar et al., 2018) often comes at the cost of reduced meaning preservation. In this work, we propose using a different training loss which naturally promotes output diversity. We follow Kumar and Tsvetkov (2019) , and instead of treating each word w in the vocabulary as a discrete unit, we represent it using a unit-normalized pre-trained vector e learned using monolingual corpora (Bojanowski et al., 2017) . At each decoding step, instead of predicting a probability distribution over the vocabulary using a softmax layer, we predict a d-dimensional continuous-valued vector \u00ea. We train our proposed model by minimizing the von Mises-Fisher (vMF) loss, a probabilistic variant of cosine distance, between the predicted vector and the pre-trained vector. At each step of decoding, the output word is generated by finding the closest neighbor (using cosine similarity) of the predicted output vector \u00ea in the pre-trained embedding table. Since this loss does not directly optimize for a specific token but for a vector subspace which contains many words with similar meanings, we observe that it has a higher tendency than softmax-based models to generate diverse outputs, at both the lexical and syntactic levels, as we show in our experiments.", "cite_spans": [ { "start": 57, "end": 79, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF44" }, { "start": 388, "end": 389, "text": "2", "ref_id": null }, { "start": 1252, "end": 1253, "text": "3", "ref_id": null }, { "start": 1505, "end": 1527, "text": "(Meister et al., 2020)", "ref_id": "BIBREF30" }, { "start": 1673, "end": 1699, "text": "(Vijayakumar et al., 2018)", "ref_id": "BIBREF45" }, { "start": 1866, "end": 1891, "text": "Kumar and Tsvetkov (2019)", "ref_id": "BIBREF24" }, { "start": 2063, "end": 2088, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The PARAvMF Model", "sec_num": "2" }, { "text": "Overall, the contribution of this work is twofold: (1) a translation- and autoencoding-based training objective to enable paraphrasing while preserving meaning without any parallel paraphrasing data, and (2) optimizing for vector subspaces instead of token probabilities to induce diversity of outputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PARAvMF Model", "sec_num": "2" }, { "text": "Datasets We evaluate paraphrasing in two languages: English and French. 
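Before turning to the experimental setup, the following sketch illustrates the continuous-output training and decoding steps described above. It assumes PyTorch and keeps only the cosine (direction-matching) core of the loss; the full NLLvMF objective of Kumar and Tsvetkov (2019) additionally includes a log-normalizer term depending on the norm of the predicted vector, which is omitted here. Function and variable names are ours, not from the released code.

```python
import torch
import torch.nn.functional as F

def embedding_output_loss(pred_vecs, target_ids, emb_table):
    """Cosine-distance core of a vMF-style loss for continuous outputs.

    pred_vecs:  (batch, seq, d) vectors produced by the decoder
    target_ids: (batch, seq) indices of the reference words
    emb_table:  (vocab, d) frozen pre-trained word embeddings
    """
    targets = F.normalize(emb_table[target_ids], dim=-1)   # unit-normalized gold vectors
    preds = F.normalize(pred_vecs, dim=-1)
    # 1 - cosine similarity, averaged over all positions
    return (1.0 - (preds * targets).sum(dim=-1)).mean()

def decode_step(pred_vec, emb_table):
    """Emit the word whose embedding is the nearest cosine neighbor of the prediction."""
    sims = F.normalize(emb_table, dim=-1) @ F.normalize(pred_vec, dim=-1)
    return int(sims.argmax())
```

At test time, generation therefore amounts to a nearest-neighbor search against the fixed embedding table rather than a softmax over the vocabulary.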
The IWSLT'16 En\u2194Fr corpus (Cettolo et al., 2016) with \u223c220K sentence pairs is used for training with the translation objective, and 4450 sentences, a random sample of \u223c1% of the training data in L 1 (either En or Fr), are used for autoencoding. We use the L 1 side of the IWSLT'16 dev set for early stopping with the autoencoding objective. We use the IWSLT'16 test set, consisting of 2331 samples each in En and Fr, for automatic evaluation. For human evaluation, we subsample 200 sentences from this set. We tokenize and truecase all the data using Moses preprocessing scripts (Koehn et al., 2007) . We conduct additional experiments with a larger En-Fr corpus constructed using a 2M sentence-pair subset of the combination of the WMT'10 Gigaword (Tiedemann, 2012) and the OpenSubtitles corpora (Lison and Tiedemann, 2016) .", "cite_spans": [ { "start": 94, "end": 116, "text": "(Cettolo et al., 2016)", "ref_id": "BIBREF6" }, { "start": 626, "end": 646, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF23" }, { "start": 796, "end": 813, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF42" }, { "start": 844, "end": 871, "text": "(Lison and Tiedemann, 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Implementation We modify the standard seq2seq transformer model in OpenNMT (Klein et al., 2017) to generate word embeddings (Kumar and Tsvetkov, 2019) , and train it with the vMF loss with respect to the target vectors. We initialize and fix the input embeddings of the encoder and decoder with off-the-shelf (sub-word based) fasttext embeddings (Bojanowski et al., 2017) for both En and Fr, and align the embeddings to encourage cross-lingual sharing (Artetxe et al., 2018) . With a vocabulary size of 50K for each language, the combined vocabulary size of the encoder and the decoder is 100K. Both the encoder and decoder consist of 6 layers with 4 attention heads. The model is optimized using Adam (Kingma and Ba, 2015), with a batch size of 4K and a dropout of 0.3. The hidden dimension size is 1024, and the dimension of the embedding layers is 512. We add a linear layer to transform the 300-dimensional pre-trained embeddings to 512-dimensional input vectors for the model. After decoding, we postprocess the generated output to replace words from L 2 by a look-up in the dictionary induced from the aligned embedding spaces.", "cite_spans": [ { "start": 75, "end": 95, "text": "(Klein et al., 2017)", "ref_id": "BIBREF22" }, { "start": 124, "end": 150, "text": "(Kumar and Tsvetkov, 2019)", "ref_id": "BIBREF24" }, { "start": 342, "end": 367, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" }, { "start": 446, "end": 468, "text": "(Artetxe et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Baselines Although unsupervised methods of paraphrasing with only monolingual data have been explored in recent works (Gupta et al., 2018; Yang et al., 2019; Roy and Grangier, 2019; Patro et al., 2018; Park et al., 2019) , they have not been shown to outperform translation-based baselines (West et al., 2020) . Hence, we compare our proposed approach with translation-based baselines only. First, we compare with bilingual pivoting baselines (Mallinson et al., 2017a,b) , which pipeline two separate translation models, L 1 \u2192 L 2 and L 2 \u2192 L 1 . 
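As an aside on the postprocessing step described under Implementation, the sketch below shows one way a replacement dictionary could be induced from the aligned embedding spaces: each L 2 word is mapped to its nearest L 1 neighbor by cosine similarity, and any decoded token outside the L 1 vocabulary is replaced through this mapping. This is a simplified reading of that step, with illustrative names; for realistic vocabulary sizes the similarity product would need to be computed in batches.

```python
import numpy as np

def induce_l2_to_l1_dictionary(l2_vecs, l1_vecs, l2_words, l1_words):
    """Map each L2 word to its nearest L1 word in an aligned embedding space.

    l2_vecs, l1_vecs: (n2, d) and (n1, d) arrays already mapped into a shared
    space (e.g., with an unsupervised aligner in the style of Artetxe et al., 2018).
    """
    l2 = l2_vecs / np.linalg.norm(l2_vecs, axis=1, keepdims=True)
    l1 = l1_vecs / np.linalg.norm(l1_vecs, axis=1, keepdims=True)
    # cosine similarity of unit vectors is a dot product; batch this for large vocabularies
    nearest = (l2 @ l1.T).argmax(axis=1)
    return {l2_words[i]: l1_words[j] for i, j in enumerate(nearest)}

def postprocess(tokens, l2_to_l1, l1_vocab):
    """Replace decoded tokens that fall outside the L1 vocabulary via the induced dictionary."""
    return [tok if tok in l1_vocab else l2_to_l1.get(tok, tok) for tok in tokens]
```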
We use two bilingual pivoting baselines, one based on the continuous-output model (BP-VMF; the output vectors of the first model are first converted to discrete tokens before being fed to the next) and another based on the softmax-based model (BP-CE).", "cite_spans": [ { "start": 118, "end": 138, "text": "(Gupta et al., 2018;", "ref_id": "BIBREF12" }, { "start": 139, "end": 157, "text": "Yang et al., 2019;", "ref_id": "BIBREF52" }, { "start": 158, "end": 181, "text": "Roy and Grangier, 2019;", "ref_id": "BIBREF40" }, { "start": 182, "end": 201, "text": "Patro et al., 2018;", "ref_id": "BIBREF33" }, { "start": 202, "end": 220, "text": "Park et al., 2019)", "ref_id": null }, { "start": 288, "end": 306, "text": "(West et al., 2020", "ref_id": "BIBREF47" }, { "start": 440, "end": 467, "text": "(Mallinson et al., 2017a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "To evaluate the impact of embedding outputs, we also compare our proposed model PARAvMF to the softmax-based baseline PARACE, leaving all other model components unchanged. PARACE is a modified bilingual version of the multilingual method proposed in Guo et al. (2019) , the current state-of-the-art in zero-shot paraphrasing. Evaluation setup There are many ways to paraphrase a sentence, but no manually crafted multi-reference paraphrase datasets exist that could be used as test sets (and there are no datasets in languages other than English). We thus evaluate the generated paraphrases on semantic similarity and lexical diversity compared to the input text. Following prior work, we use the n-gram-based metric METEOR (Banerjee and Lavie, 2005) . Despite accounting for synonyms, it is not well-suited to evaluate paraphrases, since it typically assigns lower scores to novel phrasings, due to incomplete synonym dictionaries. We thus also include BERTScore (Zhang et al., 2020), computing cosine similarity between the contextual embeddings of two sentences. Naturally, just copying the inputs can also lead to high scores in these metrics. To evaluate lexical diversity, we follow Hu et al. (2019b) and include IoU (Intersection over Union, also called the Jaccard Index) and Word Error Rate (WER). To measure structural diversity, we use (constituency) Parse Tree Edit Distance (PTED). 4 Note that model outputs that do not preserve meaning (and generate totally different sentences) will also obtain high diversity scores; such outputs are not indicative of quality paraphrasing, but they would falsely inflate diversity scores averaged across the entire test set. We thus measure diversity only on subsets of the test set for which the strongest baseline (PARACE) and our model generate meaning-preserving paraphrases, as measured using BERTScore thresholds. We report the diversity scores for three such thresholds: 0.95, 0.9, and 0.85, selected empirically such that the sample size is sufficiently large. Results (automatic evaluation): We observe in table 1 that PARAvMF outperforms all baselines in meaning preservation. Both pivoting-based baselines perform poorly on average. This is a consequence of error propagation, which is exacerbated in BP-VMF 5 . As a result, only a very small fraction of their generated sentences show meaning preservation (as measured by achieving a BERTScore greater than 0.85). Hence, we only compare diversity for the two best meaning-preserving models, PARACE and PARAvMF. As shown in table 2, across all thresholds the latter model achieves higher lexical and syntactic diversity in the outputs. 
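For concreteness, the two lexical diversity measures used above could be computed as follows between an input and its paraphrase. This sketch reflects our reading of the metric definitions, not the authors' evaluation script; in the reported experiments such scores are only averaged over outputs that first pass a BERTScore meaning-preservation threshold.

```python
def iou(src_tokens, para_tokens):
    """Intersection-over-Union (Jaccard index) of the two token sets; lower means more lexically diverse."""
    a, b = set(src_tokens), set(para_tokens)
    return len(a & b) / len(a | b) if a | b else 1.0

def wer(ref_tokens, hyp_tokens):
    """Word error rate: token-level edit distance normalized by reference length; higher means more diverse."""
    n, m = len(ref_tokens), len(hyp_tokens)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[n][m] / max(n, 1)
```

A paraphrase that reuses fewer of the input's words yields a lower IoU and a higher WER.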
Ablation results in the Appendix show that both the autoencoding objective and the final embedding layer contribute to the improved quality of paraphrases. An additional benefit of our proposed model is that, by replacing the softmax layer with word embeddings, PARAvMF is trained 3x faster than the PARACE baseline. We further conduct a manual evaluation which quantifies the rate at which annotators find paraphrases fluent, consistent with the input meaning, and novel in phrasing. In an A/B testing setup, we compare our proposed approach with the strongest baseline, PARACE. 6 200 sentences sampled from the IWSLT English test set were scored by two annotators independently, which yielded an inter-annotator agreement of 0.37 (fair agreement). Out of the sentences on which both annotators agree (142 out of 200), we find that the PARAvMF model outperforms the PARACE model in 73% of votes. We show more details and some examples of PARAvMF and PARACE system outputs in the Appendix.", "cite_spans": [ { "start": 242, "end": 259, "text": "Guo et al. (2019)", "ref_id": "BIBREF11" }, { "start": 715, "end": 741, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF1" }, { "start": 1180, "end": 1197, "text": "Hu et al. (2019b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Finally, we also verify that our results hold on a larger dataset in a different domain. We retrain PARAvMF and PARACE on the 2M En-Fr corpus described in \u00a73. 7 The results of automatic evaluation are presented in the Appendix. We conduct human evaluation on a sample of 200 sentences from this test set, following the same A/B testing procedure as described above, with each sample rated by three annotators, resulting in a pairwise-average kappa agreement index of 0.21. 8 42.9% of PARAvMF outputs were selected as better paraphrases, compared to 24.5% of outputs from PARACE, supporting our main results on the IWSLT dataset.", "cite_spans": [ { "start": 155, "end": 156, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Bilingual pivoting is a common technique used with bilingual data (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Pavlick et al., 2015; Mallinson et al., 2017a) . PARANMT is a large pseudo-parallel paraphrase corpus constructed through back-translation (Wieting et al., 2017) . Iyyer et al. (2018) augment it with syntactic constraints for controlled paraphrasing; PARABANK (Hu et al., 2019a) improves upon PARANMT via lexical constraining of decoding; and PARABANK 2 (Hu et al., 2019b) improves the diversity of paraphrases in PARABANK through a clustering-based approach. Note that these works are focused on English. Here, we propose a language-independent approach relying only on abundant bilingual data. Our approach is most similar to Guo et al. (2019) , who use bilingual and multilingual translation for zero-shot paraphrasing. They, however, observe that bilingual models are insufficient for paraphrasing and are often unable to produce the output in the correct language. 
We incorporate an autoencoding objective, which simplifies and stabilizes training, and embedding-based outputs, which improve the diversity of the generated paraphrases.", "cite_spans": [ { "start": 66, "end": 94, "text": "(Barzilay and McKeown, 2001;", "ref_id": "BIBREF2" }, { "start": 95, "end": 121, "text": "Ganitkevitch et al., 2013;", "ref_id": "BIBREF10" }, { "start": 122, "end": 143, "text": "Pavlick et al., 2015;", "ref_id": "BIBREF35" }, { "start": 144, "end": 168, "text": "Mallinson et al., 2017a)", "ref_id": "BIBREF27" }, { "start": 263, "end": 285, "text": "(Wieting et al., 2017)", "ref_id": "BIBREF49" }, { "start": 288, "end": 307, "text": "Iyyer et al. (2018)", "ref_id": "BIBREF18" }, { "start": 384, "end": 402, "text": "(Hu et al., 2019a)", "ref_id": "BIBREF15" }, { "start": 478, "end": 496, "text": "(Hu et al., 2019b)", "ref_id": "BIBREF16" }, { "start": 752, "end": 769, "text": "Guo et al. (2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We present PARAvMF, an end-to-end model for generating paraphrases, trained solely with bilingual data, without any paraphrase supervision. We propose to generate paraphrases into meaning spaces. 7 We use 4K English sentences subsampled (\u223c0.1% of the training data) from the same corpus for autoencoding. To further discourage copying, we use denoised autoencoding (Lample et al., 2018) . 8 We discarded around 53 samples with no clear majority among the annotator ratings and report the results on the remaining samples, further ignoring cases where the paraphrases from both models were rated to be of similar quality. A System diagram", "cite_spans": [ { "start": 188, "end": 189, "text": "7", "ref_id": null }, { "start": 357, "end": 378, "text": "(Lample et al., 2018)", "ref_id": "BIBREF25" }, { "start": 381, "end": 382, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The PARAvMF system is represented diagrammatically in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 62, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Sample outputs of the PARAvMF and PARACE models are shown in table 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Example outputs", "sec_num": null }, { "text": "To measure the impact of the size of the parallel translation data used for training, we conduct an experiment with a larger French-English corpus constructed using a 2M sentence-pair subset of the combination of the WMT'10 Gigaword (Tiedemann, 2012) and the OpenSubtitles corpora (Lison and Tiedemann, 2016) . The semantic similarity scores and the diversity results are presented in table 4. The results of human evaluation are presented in the main paper.", "cite_spans": [ { "start": 277, "end": 304, "text": "(Lison and Tiedemann, 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "C Training on a Larger Translation Dataset", "sec_num": null }, { "text": "We evaluate the PARAvMF model (trained on English-French two-way translation data and English autoencoding data from the IWSLT'16 dataset) on test data sampled from PARANMT-50M, to demonstrate its paraphrasing ability on out-of-domain input, in addition to enabling direct comparison with back-translated data, as shown in table 3. 
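As an aside on footnote 7 above, denoised autoencoding corrupts the input before asking the model to reconstruct it, so that the autoencoding objective cannot be satisfied by copying alone. A minimal sketch of such a noise function, in the spirit of Lample et al. (2018), is given below; the dropout probability and shuffle window are placeholder values, since the paper does not specify the exact noise parameters.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, seed=None):
    """Corrupt a token sequence for denoised autoencoding: word dropout plus
    local shuffling, so each word can only move a few positions."""
    rng = random.Random(seed)
    # randomly drop words, but never return an empty sequence
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # jitter each position by a small random amount and re-sort to shuffle locally
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]
```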
However, it should be noted that the comparison is not a fair one, since PARAvMF is trained on just 220K data samples, whereas PARANMT is back-translated using a translation model that was trained on a bilingual dataset with a size of around 70M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Evaluation on PARANMT-50M Test Set", "sec_num": null }, { "text": "We proposed three changes to a multilingual MT setup to use bilingual data for paraphrasing: (1) predicting continuous outputs and training with the vMF loss, (2) language-specific start tokens in the encoder, and (3) an autoencoding objective. In the results section of the main paper, by comparing our method to PARACE, we already established the importance of using vMF compared to cross-entropy. As shown in table 7, ablating either of the two remaining components leads to a considerable performance drop. This is because the ablated models generate outputs in L 2 , since they are never exposed to monolingual examples during training. Additionally, in our preliminary experiments, we also observe that increasing the size of the autoencoding data much beyond \u223c1% of the size of the parallel translation data leads to a performance drop, because the model starts to learn to copy the input as-is rather than rephrasing. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E Ablation", "sec_num": null }, { "text": "It 's expensive , it takes a long time , and it 's very complicated .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "It 's expensive takes a time , and it 's very complicated . PARAvMF It 's costly , It takes a long time , and it 's very difficult .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARACE", "sec_num": null }, { "text": "These are things to talk about and think about now , with your family and your loved ones .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "These are things to talk about and think about now , with your family and your loved ones . PARAvMF These are things to speak of and think of now , with your family and the ones you love. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARACE", "sec_num": null }, { "text": "And this work has been wonderful . It 's been great .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "And this work has been wonderful . It 's been great . PARAvMF This work has been wonderful and great .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PARACE", "sec_num": null }, { "text": "I wasn 't doing anything that was out of the ordinary at all . PARACE I wasn 't doing anything that was out of the regular regular at all . PARAvMF I was doing nothing that was not ordinary .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "It will make tons of people watch , because people want this experience .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "It will make tons of people watch , because people want this . PARAvMF Tonnes of people will look because they want this experience . Table 5 : Comparison of selected sample outputs for the IWSLT Test Set between the PARAvMF model and the baselines. PARAvMF not only exhibits content preservation, but also demonstrates fluency as well as lexical and syntactic diversity. 
Figure 1 : The PARAvMF Model: The decoder generates continuous-valued vectors at each step. It is trained by minimizing the von Mises-Fisher loss between the output vectors and the pre-trained embeddings of the target words. Start tokens signalling the target language are supplied to both the encoder and the decoder. The training data consists of translation samples, L 1 \u2194 L 2 , and autoencoding samples, L 1 \u2192 L 1 . During testing, the model outputs the word in the target vocabulary whose embedding is closest to the generated vector in terms of cosine similarity.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 5", "ref_id": null }, { "start": 368, "end": 376, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "PARACE", "sec_num": null }, { "text": "PARACE 39 (27.3%) PARAvMF 104 (72.7%) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Votes (%)", "sec_num": null }, { "text": "The code is available at https://github.com/monisha-jega/paraphrasing_embedding_outputs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To bias the model against always decoding in the other language, unlike in Johnson et al. (2017) and Tiedemann and Scherrer (2019), we provide a language-specific start token in the encoder input, in addition to the decoder input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We empirically determine this sample size to be \u223c1% of the total number of training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Before computing the PTED, we prune the tree to a max height of 3, and discard all the terminal nodes. We employ Stanford CoreNLP (Manning et al., 2014) for parsing and the APTED algorithm for edit distance (Pawlik and Augsten, 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is expected, as vMF has been shown to slightly underperform CE for translation in prior work (Kumar and Tsvetkov, 2019). Our training procedure with an autoencoding objective alleviates this issue in PARAvMF. 6 Each judge is presented with a set of questions, each consisting of an input sentence and paraphrases generated by the two models as options, and is asked to choose the sentence that is fluent, meaning-preserving, and offers a novel phrasing of the input. They are asked to choose neither if both sentences are dis-fluent and/or not able to preserve content. The options are shuffled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based upon work supported by the National Science Foundation (NSF) under Grants No. IIS2040926 and IIS2007960. 
The views and opinions of authors expressed herein do not necessarily state or reflect those of the NSF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "789--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 789-798, Melbourne, Australia.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "METEOR: An automatic metric for mt evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for mt evaluation with im- proved correlation with human judgments. In Pro- ceedings of the acl workshop on intrinsic and ex- trinsic evaluation measures for machine translation and/or summarization, pages 65-72.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting paraphrases from a parallel corpus", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen R. McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of the 39th Annual Meeting of the Associa- tion for Computational Linguistics, pages 50-57.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semantic parsing via paraphrasing", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1415--1425", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic pars- ing via paraphrasing. 
In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415- 1425.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ask the right questions: Active question reformulation with reinforcement learning", "authors": [ { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Jannis", "middle": [], "last": "Bulian", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Gajewski", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Buck, Jannis Bulian, Massimiliano Cia- ramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2018. Ask the right questions: Active question reformulation with rein- forcement learning. In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The iwslt 2016 evaluation campaign", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Niehues", "middle": [], "last": "Jan", "suffix": "" }, { "first": "St\u00fcker", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Roldano", "middle": [], "last": "Cattoni", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" } ], "year": 2016, "venue": "International Workshop on Spoken Language Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Niehues Jan, St\u00fcker Sebastian, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2016. The iwslt 2016 evaluation campaign. In In- ternational Workshop on Spoken Language Transla- tion.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Answering the question you wish they had asked: The impact of paraphrasing for question answering", "authors": [ { "first": "Pablo", "middle": [], "last": "Duboue", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", "volume": "", "issue": "", "pages": "33--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pablo Duboue and Jennifer Chu-Carroll. 2006. 
An- swering the question you wish they had asked: The impact of paraphrasing for question answering. In Proceedings of the Human Language Technol- ogy Conference of the NAACL, Companion Volume: Short Papers, pages 33-36. Association for Compu- tational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Data augmentation for low-resource neural machine translation", "authors": [ { "first": "Marzieh", "middle": [], "last": "Fadaee", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "567--573", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567- 573.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Open question answering over curated and extracted knowledge bases", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "1156--1165", "other_ids": { "DOI": [ "10.1145/2623330.2623677" ] }, "num": null, "urls": [], "raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and ex- tracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, page 1156-1165. Association for Computing Machinery.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "PPDB: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Zero-shot paraphrase generation with multilingual language models", "authors": [ { "first": "Yinpeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yibo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03597" ] }, "num": null, "urls": [], "raw_text": "Yinpeng Guo, Yi Liao, Xin Jiang, Qing Zhang, Yibo Zhang, and Qun Liu. 2019. Zero-shot para- phrase generation with multilingual language mod- els. arXiv preprint arXiv:1911.03597.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A deep generative framework for paraphrase generation", "authors": [ { "first": "Ankush", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Prawaan", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Rai", "suffix": "" } ], "year": 2018, "venue": "The Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In The Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Methods for using textual entailment in open-domain question answering", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Hickl", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "905--912", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain ques- tion answering. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Compu- tational Linguistics, pages 905-912. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sequence-to-sequence data augmentation for dialogue language understanding", "authors": [ { "first": "Yutai", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1234--1245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1234-1245. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "PARABANK: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation", "authors": [ { "first": "Edward", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6521--6528", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Edward Hu, Rachel Rudinger, Matt Post, and Ben- jamin Van Durme. 2019a. PARABANK: Monolin- gual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6521-6528.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Largescale, diverse, paraphrastic bitexts via sampling and clustering", "authors": [ { "first": "J", "middle": [ "Edward" ], "last": "Hu", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Holzenberger", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "44--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019b. Large- scale, diverse, paraphrastic bitexts via sampling and clustering. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 44-54. Association for Computa- tional Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Text simplification for reading assistance: A project note", "authors": [ { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujita", "suffix": "" }, { "first": "Tetsuro", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "Ryu", "middle": [], "last": "Iida", "suffix": "" }, { "first": "Tomoya", "middle": [], "last": "Iwakura", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Second International Workshop on Paraphrasing", "volume": "16", "issue": "", "pages": "9--16", "other_ids": { "DOI": [ "10.3115/1118984.1118986" ] }, "num": null, "urls": [], "raw_text": "Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplifica- tion for reading assistance: A project note. In Pro- ceedings of the Second International Workshop on Paraphrasing -Volume 16, page 9-16. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Adversarial example generation with syntactically controlled paraphrase networks", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1875--1885", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using paraphrasing and memory-augmented models to combat data sparsity in question interpretation with a virtual patient dialogue system", "authors": [ { "first": "Lifeng", "middle": [], "last": "Jin", "suffix": "" }, { "first": "David", "middle": [], "last": "King", "suffix": "" }, { "first": "Amad", "middle": [], "last": "Hussein", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Danforth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "13--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifeng Jin, David King, Amad Hussein, Michael White, and Douglas Danforth. 2018. Using paraphrasing and memory-augmented models to combat data spar- sity in question interpretation with a virtual patient dialogue system. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 13-23. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. 
Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. volume 5, pages 339- 351.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "OpenNMT: Opensource toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72. Association for Computational Lin- guistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. 
In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, page 177-180. Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Von Mises-Fisher loss for training sequence to sequence models with continuous outputs", "authors": [ { "first": "Sachin", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sachin Kumar and Yulia Tsvetkov. 2019. Von Mises- Fisher loss for training sequence to sequence mod- els with continuous outputs. In International Con- ference on Learning Representations.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised ma- chine translation using monolingual corpora only. In International Conference on Learning Represen- tations.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Opensub-titles2016: Extracting large parallel corpora from movie and tv subtitles", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. Opensub- titles2016: Extracting large parallel corpora from movie and tv subtitles.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Paraphrasing revisited with neural machine translation", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017a. Paraphrasing revisited with neural ma- chine translation. 
In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Paraphrasing revisited with neural machine translation", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017b. Paraphrasing revisited with neural ma- chine translation. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893. Association for Computational Lin- guistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Generalized entropy regularization or: There's nothing special about label smoothing", "authors": [ { "first": "Clara", "middle": [], "last": "Meister", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Salesky", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6870--6886", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.615" ] }, "num": null, "urls": [], "raw_text": "Clara Meister, Elizabeth Salesky, and Ryan Cot- terell. 2020. Generalized entropy regularization or: There's nothing special about label smoothing. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6870- 6886, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Paraphrase diversification using counterfactual debiasing", "authors": [ { "first": "Jinyeong", "middle": [], "last": "Yim", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "6883--6891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinyeong Yim. 2019. Paraphrase diversification us- ing counterfactual debiasing. 
In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6883-6891.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning semantic sentence embeddings using sequential pairwise discriminator", "authors": [ { "first": "Vinod", "middle": [ "Kumar" ], "last": "Badri Narayana Patro", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Kurmi", "suffix": "" }, { "first": "Vinay", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "", "middle": [], "last": "Namboodiri", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "2715--2729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badri Narayana Patro, Vinod Kumar Kurmi, Sandeep Kumar, and Vinay Namboodiri. 2018. Learning se- mantic sentence embeddings using sequential pair- wise discriminator. pages 2715-2729.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Simple PPDB: A paraphrase database for simplification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "143--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Chris Callison-Burch. 2016. Simple PPDB: A paraphrase database for simplification. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 143-148.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "425--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 425-430. Association for Com- putational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Efficient computation of the tree edit distance", "authors": [ { "first": "Mateusz", "middle": [], "last": "Pawlik", "suffix": "" }, { "first": "Nikolaus", "middle": [], "last": "Augsten", "suffix": "" } ], "year": 2015, "venue": "ACM Trans. Database Syst", "volume": "40", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1145/2699485" ] }, "num": null, "urls": [], "raw_text": "Mateusz Pawlik and Nikolaus Augsten. 2015. Efficient computation of the tree edit distance. ACM Trans. 
Database Syst., 40(1).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Text simplification for language learners: A corpus analysis", "authors": [ { "first": "E", "middle": [], "last": "Sarah", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Petersen", "suffix": "" }, { "first": "", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2007, "venue": "Workshop on Speech and Language Technology in Education", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah E Petersen and Mari Ostendorf. 2007. Text sim- plification for language learners: A corpus analysis. In Workshop on Speech and Language Technology in Education.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Neural paraphrase generation with stacked residual LSTM networks", "authors": [ { "first": "Aaditya", "middle": [], "last": "Prakash", "suffix": "" }, { "first": "A", "middle": [], "last": "Sadid", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" }, { "first": "V", "middle": [], "last": "Vivek", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Datla", "suffix": "" }, { "first": "Joey", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Oladimeji", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Farri", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. CoRR, abs/1610.03098.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Investigating a generic paraphrase-based approach for relation extraction", "authors": [ { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Milen", "middle": [], "last": "Kouylekov", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Lavelli", "suffix": "" } ], "year": 2006, "venue": "11th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase-based approach for relation ex- traction. In 11th Conference of the European Chap- ter of the Association for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Unsupervised paraphrasing without translation", "authors": [ { "first": "Aurko", "middle": [], "last": "Roy", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. 
In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Paraphrase generation as zero-shot multilingual translation: Disentangling semantic similarity from lexical and syntactic diversity", "authors": [ { "first": "Brian", "middle": [], "last": "Thompson", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.04935" ] }, "num": null, "urls": [], "raw_text": "Brian Thompson and Matt Post. 2020. Paraphrase gen- eration as zero-shot multilingual translation: Disen- tangling semantic similarity from lexical and syntac- tic diversity. arXiv preprint arXiv:2008.04935.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Parallel data, tools and interfaces in opus", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Measuring semantic abstraction of multilingual NMT with paraphrase recognition and generation tasks", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP", "volume": "", "issue": "", "pages": "35--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann and Yves Scherrer. 2019. Measur- ing semantic abstraction of multilingual NMT with paraphrase recognition and generation tasks. In Pro- ceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pages 35-42.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Kaiser", "middle": [], "last": "", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, page 6000-6010.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "authors": [ { "first": "K", "middle": [], "last": "Ashwin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Vijayakumar", "suffix": "" }, { "first": "Ramprasath", "middle": [ "R" ], "last": "Cogswell", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Selvaraju", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "David", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Crandall", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ram- prasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search: Decoding diverse solutions from neural se- quence models.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A task in a suit and a tie: Paraphrase generation with semantic augmentation", "authors": [ { "first": "Su", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Nancy", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7176--7183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019. A task in a suit and a tie: Para- phrase generation with semantic augmentation. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7176-7183.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Reflective decoding: Unsupervised paraphrasing and abductive reasoning", "authors": [ { "first": "Peter", "middle": [], "last": "West", "suffix": "" }, { "first": "Ximing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Jena", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.08566" ] }, "num": null, "urls": [], "raw_text": "Peter West, Ximing Lu, Ari Holtzman, Chan- dra Bhagavatula, Jena Hwang, and Yejin Choi. 2020. Reflective decoding: Unsupervised para- phrasing and abductive reasoning. arXiv preprint arXiv:2010.08566.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "PARANMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2018. PARANMT- 50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. 
pages 451-462.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Learning paraphrastic sentence embeddings from back-translated bitext", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "274--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning paraphrastic sentence embeddings from back-translated bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 274-285. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Optimizing statistical machine translation for text simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Doc-Chat: An information retrieval approach for chatbot engines using unstructured documents", "authors": [ { "first": "Zhao", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Junwei", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jianshe", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "516--525", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao Yan, Nan Duan, Junwei Bao, Peng Chen, Ming Zhou, Zhoujun Li, and Jianshe Zhou. 2016. Doc- Chat: An information retrieval approach for chatbot engines using unstructured documents. 
In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 516-525.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "An end-to-end generative architecture for paraphrase generation", "authors": [ { "first": "Qian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhouyuan", "middle": [], "last": "Huo", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3132--3142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Yang, Zhouyuan Huo, Dinghan Shen, Yong Cheng, Wenlin Wang, Guoyin Wang, and Lawrence Carin. 2019. An end-to-end generative architec- ture for paraphrase generation. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3132-3142. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF2": { "content": "
spaces as opposed to discrete tokens. This leads to significant improvements in the quality and diversity of paraphrasing over strong baselines.
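The sentence above refers to generating into embedding spaces rather than predicting discrete tokens through a softmax layer. The following is a minimal sketch (not the authors' code) of that decoding scheme, using a simple negative-cosine surrogate for the von Mises-Fisher loss of Kumar and Tsvetkov (2019) cited in the references; all module and tensor names are illustrative.

# Sketch: continuous-output decoding. The decoder predicts a vector e_hat;
# the output word is the vocabulary entry whose (pretrained, frozen)
# embedding is closest in cosine similarity, instead of a softmax argmax.
import torch
import torch.nn.functional as F

vocab_size, emb_dim, hid_dim = 50_000, 300, 512
E = F.normalize(torch.randn(vocab_size, emb_dim), dim=-1)  # frozen target word embeddings
proj = torch.nn.Linear(hid_dim, emb_dim)                    # decoder output projection

def predict_tokens(decoder_states: torch.Tensor) -> torch.Tensor:
    """Map decoder hidden states (batch, hid_dim) to token ids by
    nearest-neighbour search in the embedding space."""
    e_hat = F.normalize(proj(decoder_states), dim=-1)        # (batch, emb_dim)
    return (e_hat @ E.T).argmax(dim=-1)                      # cosine-similarity argmax

def continuous_output_loss(decoder_states: torch.Tensor, gold_ids: torch.Tensor) -> torch.Tensor:
    """Surrogate for the vMF objective: maximise cosine similarity between
    the predicted vector and the gold word's embedding."""
    e_hat = F.normalize(proj(decoder_states), dim=-1)
    return (1.0 - (e_hat * E[gold_ids]).sum(dim=-1)).mean()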
", "text": "Diversity of meaning-preserving paraphrases compared to the test set. PARAvMF outperforms a strong baseline PARACE for both English and French, across all metrics for thresholds 0.85 and 0.9, and in IoU and WER for threshold of 0.95.", "html": null, "type_str": "table", "num": null }, "TABREF3": { "content": "
Evaluation of paraphrase generation on the PARANMT test set.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
", "text": "", "html": null, "type_str": "table", "num": null }, "TABREF5": { "content": "", "text": "Evaluation of paraphrase generation with PARAvMF trained on 2M English-French sentence pairs. It outperforms a strong cross-entropy based baseline (PARACE) on semantic similarity and majority of diversity metrics.", "html": null, "type_str": "table", "num": null }, "TABREF7": { "content": "
Model | BLEU | BS | MET.
PARAvMF | 64.0 | 88.6 | 91.6
- encoder start token | 0.86 | 46.0 | 12.0
- autoencoding | 0.85 | 46.0 | 12.1
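The automatic columns in the ablation table above (BLEU, BS = BERTScore, MET. = METEOR) are typically computed with off-the-shelf packages. The paper does not list its exact scoring commands, so the package choices below are assumptions; METEOR usually comes from a separate tool and is omitted here.

# Hedged sketch of the automatic metrics, not the authors' scoring setup.
import sacrebleu
from bert_score import score as bert_score

hyps = ["the film was surprisingly good"]            # model paraphrases
refs = ["the movie turned out to be really good"]    # references

bleu = sacrebleu.corpus_bleu(hyps, [refs]).score     # corpus-level BLEU
P, R, F1 = bert_score(hyps, refs, lang="en")         # per-sentence BERTScore
print(f"BLEU={bleu:.1f}  BERTScore-F1={F1.mean().item():.3f}")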
", "text": "PARAvMF outperforms the baseline in manual A/B testing (English).", "html": null, "type_str": "table", "num": null }, "TABREF8": { "content": "", "text": "Performance of PARAvMF without the proposed enhancements -removing either leads to a drastic performance drop", "html": null, "type_str": "table", "num": null } } } }