{ "paper_id": "I17-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:39:09.114179Z" }, "title": "MIPA: Mutual Information Based Paraphrase Acquisition via Bilingual Pivoting", "authors": [ { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "kajiwara-tomoyuki@ed.tmu.ac.jp" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Metropolitan University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "komachi@tmu.ac.jp" }, { "first": "Daichi", "middle": [], "last": "Mochihashi", "suffix": "", "affiliation": {}, "email": "daichi@ism.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a pointwise mutual information (PMI) based approach for formalizing paraphrasability and propose a variant of PMI, called mutual information based paraphrase acquisition (MIPA), for paraphrase acquisition. Our paraphrase acquisition method first acquires lexical paraphrase pairs by bilingual pivoting and then reranks them by PMI and distributional similarity. The complementary nature of information from bilingual corpora and from monolingual corpora renders the proposed method robust. Experimental results show that the proposed method substantially outperforms bilingual pivoting and distributional similarity themselves in terms of metrics such as mean reciprocal rank, mean average precision, coverage, and Spearman's correlation.", "pdf_parse": { "paper_id": "I17-1009", "_pdf_hash": "", "abstract": [ { "text": "We present a pointwise mutual information (PMI) based approach for formalizing paraphrasability and propose a variant of PMI, called mutual information based paraphrase acquisition (MIPA), for paraphrase acquisition. Our paraphrase acquisition method first acquires lexical paraphrase pairs by bilingual pivoting and then reranks them by PMI and distributional similarity. The complementary nature of information from bilingual corpora and from monolingual corpora renders the proposed method robust. Experimental results show that the proposed method substantially outperforms bilingual pivoting and distributional similarity themselves in terms of metrics such as mean reciprocal rank, mean average precision, coverage, and Spearman's correlation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Paraphrases are useful for flexible language understanding in many NLP applications. For example, the usefulness of the paraphrase database PPDB (Ganitkevitch et al., 2013; Pavlick et al., 2015) , a publicly available largescale resource for lexical paraphrasing, has been reported for tasks such as learning word embeddings (Yu and Dredze, 2014) and semantic textual similarity (Sultan et al., 2015) . In PPDB, paraphrase pairs are acquired via word alignment on a bilingual corpus by a process called bilingual pivoting (Bannard and Callison-Burch, 2005) . 
Figure 1 shows an example of English language paraphrase acquisition using the German language as a pivot.", "cite_spans": [ { "start": 145, "end": 172, "text": "(Ganitkevitch et al., 2013;", "ref_id": "BIBREF9" }, { "start": 173, "end": 194, "text": "Pavlick et al., 2015)", "ref_id": "BIBREF23" }, { "start": 325, "end": 346, "text": "(Yu and Dredze, 2014)", "ref_id": "BIBREF28" }, { "start": 379, "end": 400, "text": "(Sultan et al., 2015)", "ref_id": "BIBREF25" }, { "start": 522, "end": 556, "text": "(Bannard and Callison-Burch, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 559, "end": 567, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although bilingual pivoting is widely used for paraphrase acquisition, it always includes noise Figure 1 : Paraphrase acquisition via bilingual pivoting (Ganitkevitch et al., 2013) .", "cite_spans": [ { "start": 153, "end": 180, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "due to unrelated word pairs caused by word alignment errors on the bilingual corpus. Distributional similarity, another well-known method for paraphrase acquisition, is free of alignment errors, but includes noise due to antonym pairs that share the same contexts on the monolingual corpus (Mohammad et al., 2013) .", "cite_spans": [ { "start": 290, "end": 313, "text": "(Mohammad et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we formalize the paraphrasability of paraphrase pairs acquired via bilingual pivoting using pointwise mutual information (PMI) and reduce the noise by reranking the pairs using distributional similarity. The proposed method extends Local PMI (Evert, 2005) , which is a variant of weighted PMI that aims to avoid lowfrequency bias in PMI, for paraphrase acquisition. Since bilingual pivoting and distributional similarity have different advantages and disadvantages, we combine them to construct a complementary paraphrase acquisition method, called mutual information based paraphrase acquisition (MIPA). 
Experimental results show that MIPA outperforms both bilingual pivoting and distributional similarity on their own in terms of metrics such as mean reciprocal rank (MRR), mean average precision (MAP), coverage, and Spearman's correlation.", "cite_spans": [ { "start": 257, "end": 270, "text": "(Evert, 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of our study are as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Bilingual pivoting-based lexical paraphrase acquisition is generalized using PMI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Lexical paraphrases are acquired robustly using both bilingual and monolingual corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We release our lexical paraphrase pairs at https://github.com/tmu-nlp/pmi-ppdb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bilingual pivoting (Bannard and Callison-Burch, 2005) is a method used to acquire large-scale lexical paraphrases by two-level word alignment on a bilingual corpus. Bilingual pivoting employs a conditional paraphrase probability p(e_2|e_1) as its paraphrasability measure when word alignments exist between an English phrase e_1 and a foreign language phrase f, and between the foreign language phrase f and another English phrase e_2, on a bilingual corpus. It calculates the probability of paraphrasing an English phrase e_1 into another English phrase e_2 using the word alignment probabilities p(f|e_1) and p(e_2|f); here, the foreign language phrase f is used as the pivot.", "cite_spans": [ { "start": 19, "end": 52, "text": "(Bannard and Callison-Burch, 2005", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Bilingual Pivoting", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(e_2|e_1) = \sum_f p(e_2|f, e_1)\, p(f|e_1) \approx \sum_f p(e_2|f)\, p(f|e_1)", "eq_num": "(1)" } ], "section": "Bilingual Pivoting", "sec_num": "2" }, { "text": "It assumes conditional independence of e_1 and e_2 given f, so that the equation above can be estimated easily using phrase-based statistical machine translation models. One of its advantages is that it requires only two translation models to acquire paraphrases on a large scale. However, since the conditional probability is asymmetric, it may introduce irrelevant paraphrases that do not have the same meaning as the original phrase. In addition, owing to data sparseness in the bilingual corpus, paraphrase probabilities may be overestimated for low-frequency word pairs. To mitigate this, PPDB (Ganitkevitch et al., 2013) defined the symmetric paraphrase score s_{bp}(e_1, e_2) using bi-directional bilingual pivoting: s_{bp}(e_1, e_2) = -\lambda_1 \log p(e_2|e_1) - \lambda_2 \log p(e_1|e_2). (2)", "cite_spans": [ { "start": 609, "end": 636, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Bilingual Pivoting", "sec_num": "2" }, { "text": "Unlike Equation (1), s_{bp} enforces mutual paraphrasability of e_1 and e_2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Pivoting", "sec_num": "2" },
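To make Equation (1) concrete, the following is a minimal Python sketch of the pivot marginalization (our illustration, not the authors' code; the German pivot entries and all probability values are hypothetical, and in practice the two tables would come from bidirectional word alignment such as GIZA++):

```python
# Minimal sketch of Equation (1): bilingual pivoting with toy probability
# tables. All entries are hypothetical; in practice p(f|e) and p(e|f) come
# from word alignment (e.g., GIZA++) run over a real bilingual corpus.
from collections import defaultdict

p_f_given_e = {"burst": {"aufgeplatzt": 0.6, "geplatzt": 0.4}}   # p(f|e1)
p_e_given_f = {                                                   # p(e2|f)
    "aufgeplatzt": {"burst": 0.5, "jumped": 0.5},
    "geplatzt": {"burst": 0.7, "cracked": 0.3},
}

def pivot_prob(e1):
    """p(e2|e1) = sum over pivots f of p(e2|f) * p(f|e1)."""
    scores = defaultdict(float)
    for f, p_f in p_f_given_e.get(e1, {}).items():
        for e2, p_e2 in p_e_given_f.get(f, {}).items():
            if e2 != e1:  # exclude the identity paraphrase e1 = e2
                scores[e2] += p_e2 * p_f
    return dict(scores)

print(pivot_prob("burst"))  # {'jumped': 0.3, 'cracked': 0.12}, up to rounding
```

Note that a single alignment error in either table propagates directly into pivot_prob; this is exactly the noise that the reranking in Section 3 is designed to suppress.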
{ "text": "As discussed later, this symmetrization does not necessarily improve the performance of paraphrase acquisition, because the symmetric constraint may be too strict to allow the extraction of broad-coverage paraphrases. In this study, without loss of generality, we set \lambda_1 = \lambda_2 = -1 (PPDB uses \lambda_1 = \lambda_2 = 1), so that s_{bp}(e_1, e_2) = \log p(e_2|e_1) + \log p(e_1|e_2). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Pivoting", "sec_num": "2" }, { "text": "Although these paraphrase acquisition methods can extract large-scale paraphrase knowledge, the results may contain many fragments caused by word alignment errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Pivoting", "sec_num": "2" }, { "text": "To mitigate overestimation, we acquire lexical paraphrases with the conditional paraphrase probability smoothed by Kneser-Ney smoothing (Kneser and Ney, 1995), and we rerank them using an information-theoretic measure computed from a bilingual corpus together with distributional similarity computed from a large-scale monolingual corpus.", "cite_spans": [ { "start": 133, "end": 155, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "MIPA: Mutual Information Based Paraphrase Acquisition", "sec_num": "3" }, { "text": "Since bilingual pivoting adopts the conditional probability p(e_2|e_1) as its paraphrasability measure, we can mitigate the problem of overestimation by applying a smoothing method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "In the hierarchical Bayesian model, the conditional probability p(y|x) is expressed using a Dirichlet distribution with parameters \alpha_y and the maximum likelihood estimate \hat{p}(y|x) as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x) = \frac{n(y|x) + \alpha_y}{\sum_y (n(y|x) + \alpha_y)} \simeq \frac{n(y|x)}{n(x) + \sum_y \alpha_y} \quad (\because \alpha_y \ll 1) = \frac{n(x)}{n(x) + \sum_y \alpha_y} \cdot \frac{n(y|x)}{n(x)} = \frac{n(x)}{n(x) + \sum_y \alpha_y} \cdot \hat{p}(y|x)", "eq_num": "(4)" } ], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "Here, n(x) indicates the frequency of word x and n(y|x) indicates the co-occurrence frequency of word y following x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" },
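To see the shrinkage in Equation (4) numerically, here is a small sketch with a symmetric prior and hypothetical counts (our illustration; the vocabulary size and alpha are made up):

```python
# Sketch of Equation (4): Dirichlet-smoothed p(y|x) from raw co-occurrence
# counts, with a symmetric prior alpha over a vocabulary of size V. The
# counts, V, and alpha are all hypothetical.
def dirichlet_smoothed(n_y_given_x, vocab_size, alpha=0.1):
    """p(y|x) = (n(y|x) + alpha) / (n(x) + V * alpha); unseen y share the rest."""
    n_x = sum(n_y_given_x.values())
    denom = n_x + vocab_size * alpha
    return {y: (c + alpha) / denom for y, c in n_y_given_x.items()}

counts = {"ruptured": 2, "jumped": 1}          # n(y|x) for some rare word x
print(dirichlet_smoothed(counts, vocab_size=10000))
# The MLE would give p(ruptured|x) = 2/3; with n(x) = 3 dwarfed by
# sum_y alpha_y = 1000, the smoothed estimate is only about 0.002 --
# the shrinkage factor n(x)/(n(x) + sum_y alpha_y) of Equation (4) at work.
```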
{ "text": "Because \sum_y \alpha_y cannot be ignored, especially when the frequency n(x) is small, Equation (4) shows that the maximum likelihood estimate \hat{p}(y|x) overestimates the probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "Therefore, we propose using Kneser-Ney smoothing (Kneser and Ney, 1995), which can be regarded as an extension of the Dirichlet smoothing above, to mitigate overestimation of the paraphrase probability in bilingual pivoting.", "cite_spans": [ { "start": 49, "end": 71, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "p_{KN}(e_2|e_1) = \frac{n(e_2|e_1) - \delta}{n(e_1)} + \gamma(e_1)\, p_{KN}(e_2), \quad \delta = \frac{N_1}{N_1 + 2N_2}, \quad \gamma(e_1) = \frac{\delta}{n(e_1)} N(e_1), \quad p_{KN}(e_2) = \frac{N(e_2)}{\sum_i N(e_i)} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "Here, N_n indicates the number of word-pair types with frequency n, and N(e_1) indicates the number of paraphrase-candidate types for the word e_1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Smoothing of Bilingual Pivoting", "sec_num": "3.1" }, { "text": "The bi-directional bilingual pivoting of PPDB (Ganitkevitch et al., 2013) constrains paraphrase acquisition to be strictly symmetric. Although this is extremely effective for extracting synonymous expressions, it tends to give high scores to frequent but irrelevant phrases, since bilingual pivoting itself contains noisy phrase pairs caused by word alignment errors.", "cite_spans": [ { "start": 46, "end": 73, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "To address the problem of frequent phrases, we smooth the pivoting-based paraphrasability of Equation (3) using the word probabilities p(e_1) and p(e_2) estimated from a monolingual corpus that is sufficiently larger than the bilingual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_{pmi}(e_1, e_2) = \log p(e_2|e_1) + \log p(e_1|e_2) - \log p(e_1) - \log p(e_2)", "eq_num": "(6)" } ], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "Thus, we can interpret bi-directional bilingual pivoting as an unsmoothed version of PMI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" },
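As a quick sanity check on Equations (6) and (7), the score admits a direct transcription (our sketch; every probability below is a hypothetical stand-in, with the conditionals coming from smoothed pivoting and the unigrams from a monolingual corpus):

```python
import math

# Direct transcription of Equations (6)-(7); all probability values here
# are hypothetical placeholders.
def s_pmi(p_e2_given_e1, p_e1_given_e2, p_e1, p_e2):
    """s_pmi(e1,e2) = log p(e2|e1) + log p(e1|e2) - log p(e1) - log p(e2)."""
    return (math.log(p_e2_given_e1) + math.log(p_e1_given_e2)
            - math.log(p_e1) - math.log(p_e2))

score = s_pmi(0.30, 0.25, 1e-4, 2e-4)
print(score, score / 2)  # the score equals 2 * PMI(e1, e2), so half of it is PMI
```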
{ "text": "Since the difference of the logarithms of numerator and denominator equals the logarithm of the quotient, we can transform Equation (6) as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_{pmi}(e_1, e_2) = \log \frac{p(e_2|e_1)}{p(e_2)} + \log \frac{p(e_1|e_2)}{p(e_1)} = 2\, \mathrm{PMI}(e_1, e_2)", "eq_num": "(7)" } ], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "since we can transform PMI into the following forms using Bayes' theorem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "\mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)p(y)} (8) = \log \frac{p(y|x)p(x)}{p(x)p(y)} = \log \frac{p(y|x)}{p(y)} = \log \frac{p(x|y)p(y)}{p(x)p(y)} = \log \frac{p(x|y)}{p(x)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "Plugging Equation (8) into Equation (7), we can interpret PMI as a geometric mean of two models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "\mathrm{PMI}(x, y) = \frac{1}{2}\mathrm{PMI}(x, y) + \frac{1}{2}\mathrm{PMI}(x, y) (9) = \frac{1}{2} \log \frac{p(y|x)}{p(y)} + \frac{1}{2} \log \frac{p(x|y)}{p(x)} = \log \left[ \left\{ \frac{p(y|x)}{p(y)} \right\}^{1/2} \cdot \left\{ \frac{p(x|y)}{p(x)} \right\}^{1/2} \right]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "Bilingual pivoting in Equation (1) can be regarded as a mixture model that considers only the e_1 \to e_2 direction. However, as shown in Equation (9), our proposed method can be regarded as a product model (Hinton, 2002) that considers both directions. PPDB (Pavlick et al., 2015) also considers the paraphrase probability in both directions, but the authors did not regard it as a product model; instead, the paraphrase probability in each direction is treated as one of the features for supervised learning.", "cite_spans": [ { "start": 202, "end": 216, "text": "(Hinton, 2002)", "ref_id": "BIBREF12" }, { "start": 254, "end": 276, "text": "(Pavlick et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Generalization of Bilingual Pivoting using Mutual Information", "sec_num": "3.2" }, { "text": "It is well known that PMI becomes unreasonably large for low-frequency word pairs because of coincidental co-occurrence. To avoid this problem, Evert (2005) proposed Local PMI, which weights PMI by the co-occurrence frequency of the word pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Distributional Similarity", "sec_num": "3.3" }, { "text": "\mathrm{LocalPMI}(x, y) = n(x, y) \cdot \mathrm{PMI}(x, y) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Distributional Similarity", "sec_num": "3.3" }, { "text": "In this study, however, it is difficult to directly compute the weight corresponding to n(x, y) in Equation (10) on the bilingual corpus. Furthermore, our aim is to measure not the strength of co-occurrence (relatedness) between words, but their paraphrasability. 
Therefore, it is not appropriate to count the co-occurrence frequency on a monolingual corpus such as Local PMI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Distributional Similarity", "sec_num": "3.3" }, { "text": "Alternatively, we use as a weight the distributional similarity, which is frequently used for paraphrase acquisition from a monolingual corpus (Chan et al., 2011; Glava\u0161 and\u0160tajner, 2015). s lpmi (e 1 , e 2 ) = cos(e 1 , e 2 ) \u2022 s pmi (e 1 , e 2 ) = cos(e 1 , e 2 ) \u2022 2PMI(e 1 , e 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Distributional Similarity", "sec_num": "3.3" }, { "text": "Here, cos(e 1 , e 2 ) indicates cosine similarity between vector representations of word e 1 and word e 2 . Equation (11) simultaneously considers paraphrasability based on the monolingual corpus (distributional similarity) and on the bilingual corpus (bilingual pivoting). Distributional similarity, as opposed to bilingual pivoting, is robust against noise associated with unrelated word pairs. Bilingual pivoting is robust against noise arising from antonym pairs, unlike distributional similarity. Therefore, s lpmi (e 1 , e 2 ) can perform paraphrase acquisition robustly by compensating the disadvantages. Hereinafter, we refer to s lpmi (e 1 , e 2 ) as MIPA, mutual information based paraphrase acquisition via bilingual pivoting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Distributional Similarity", "sec_num": "3.3" }, { "text": "We used French-English parallel data 4 from Europarl-v7 (Koehn, 2005 ) and GIZA++ (Och and Ney, 2003) (IBM model 4) to calculate the conditional paraphrase probability p(e 2 |e 1 ) and p(e 1 |e 2 ). We also used the English Gigaword 5th Edition 5 and KenLM (Heafield, 2011) to calculate the word probability p(e 1 ) and p(e 2 ). For cos(e 1 , e 2 ), we used the CBOW model 6 of word2vec (Mikolov et al., 2013a) . Finally, we acquired paraphrase candidates of 170,682,871 word pairs, excepting the paraphrase of itself (e 1 = e 2 ).", "cite_spans": [ { "start": 56, "end": 68, "text": "(Koehn, 2005", "ref_id": "BIBREF14" }, { "start": 257, "end": 273, "text": "(Heafield, 2011)", "ref_id": "BIBREF11" }, { "start": 387, "end": 410, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.1" }, { "text": "We employed the conditional paraphrase probability of bilingual pivoting given in Equation (1), the symmetric paraphrase score of PPDB given by Equation (3), and distributional similarity as baselines, and compared them with PMI shown in Equation (7) and the MIPA score given in Equation (11). Note that distributional similarity im-plies that the paraphrase pairs acquired via bilingual pivoting were reranked by distributional similarity rather than by using the top-k distributionally similar words among all the vocabularies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.1" }, { "text": "For evaluation, we used two datasets included in Human Paraphrase Judgments 7 published by Pavlick et al. (2015) ; hereafter, we call these datasets HPJ-Wikipedia and HPJ-PPDB, respectively.", "cite_spans": [ { "start": 91, "end": 112, "text": "Pavlick et al. 
(2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Datasets and Metrics", "sec_num": "4.2" }, { "text": "First, Human Paraphrase Judgments includes a paraphrase list of 100 words or phrases randomly extracted from Wikipedia and processed using a five-step manual evaluation for each paraphrase pair (HPJ-Wikipedia). A correct paraphrase is a word that gained three or more evaluations in manual evaluation. We used this dataset to evaluate the acquired paraphrase pairs by MRR and MAP, following Pavlick et al. (2015) . Furthermore, we evaluated the coverage of the top-k paraphrase pairs. Function words such as \"as\" have more than 50,000 types of paraphrase candidates, because they are sensitive to word alignment errors in bilingual pivoting. However, since many of these paraphrase candidates are word pairs that are not in fact paraphrases, we evaluated the coverage in terms of the extent to which they can reduce unnecessary candidates while preserving the correct paraphrases.", "cite_spans": [ { "start": 391, "end": 412, "text": "Pavlick et al. (2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Datasets and Metrics", "sec_num": "4.2" }, { "text": "Second, Human Paraphrase Judgments also includes a five-step manual evaluation of 26,456 word pairs sampled from PPDB (Ganitkevitch et al., 2013 ) (HPJ-PPDB) together with the paraphrase list of 100 words. We used this dataset to evaluate the overall paraphrase ranking based on Spearman's correlation coefficient, as in Pavlick et al. (2015) . Figures 2 and 3 show the effectiveness of adopting Kneser-Ney smoothing for bilingual pivoting in terms of MRR and MAP on HPJ-Wikipedia. The horizontal axis of each graph represents the evaluation of the paraphrase up to the top-k of the paraphrase score. The results confirm that the ranking of paraphrases acquired via bilingual pivoting was improved by applying Kneser-Ney smoothing. In the rest of this study, we always applied Kneser-Ney smoothing to conditional paraphrase probability.", "cite_spans": [ { "start": 118, "end": 144, "text": "(Ganitkevitch et al., 2013", "ref_id": "BIBREF9" }, { "start": 321, "end": 342, "text": "Pavlick et al. (2015)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 345, "end": 360, "text": "Figures 2 and 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation Datasets and Metrics", "sec_num": "4.2" }, { "text": "Figures 4 and 5 show the comparison of paraphrase rankings in MRR and MAP on HPJ-Wikipedia. The evaluation by MRR, shown in Figure 4 , demonstrates that the ranking performance of paraphrase pairs is improved by making bilingual pivoting symmetric. PMI slightly outperforms the baselines of bilingual pivoting below the top five. Furthermore, MIPA shows the highest performance, because reranking by distributional similarity greatly improves bilingual pivoting.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "The evaluation using MAP, shown in Figure 5 , also reinforced the same result, i.e., reranking by distribution similarity improved bilingual pivoting, and MIPA showed the highest performance. Figure 6 shows the coverage of the top-ranked paraphrases on HPJ-Wikipedia. 
Despite the fact that the symmetric paraphrase score is better than the conditional paraphrase probability in the ranking performance of the top three in MRR and MAP, it shows a poor performance in terms of coverage. Although there is not a significant difference between MIPA and the other methods, MIPA was shown to outperform them. Figures 7 and 8 show the scatter plots and Spearman's correlation coefficient of each paraphrase score and manual evaluation (average value of five evaluators) on HPJ-PPDB. As in the previous experimental results, MIPA showed a high correlation. In particular, the noise generated by false positives at the upper left of the scatter plot can be reduced by combining PMI and distributional similarity. Table 1 shows examples of the top 10 in paraphrase rankings. In the paraphrase examples of cultural, conditional paraphrase probability does not score the correct paraphrase as top-ranked words. Although the symmetric paraphrase score ranked the correct paraphrase at the top, words other than the top words are less reliable, as shown by the previous experimental results. PMI is strongly influenced by low-frequency words, and many of the top-ranked words are singleton words in the bilingual corpus. MIPA, in contrast, mitigates the problem of low-frequency bias, and many of the top-ranked words are correct paraphrases. Distributional similarity-based methods include relatively numerous correct paraphrases at the top, and the other top-ranked words are also strongly related to cultural. From the viewpoint of paraphrases, 3 of the top 10 words of the proposed method are incorrect, but these words may also be useful for applications such as learning word embeddings (Yu and Dredze, 2014) and semantic textual similarity (Sultan et al., 2015). Table 2 shows correct examples of the paraphrase rankings. In the paraphrase examples of labourers, there were 20 correct paraphrases that received a rating of 3 or higher in manual evaluation. With respect to the conditional paraphrase probability and PMI, it is necessary to consider up to the 400th place to cover all correct paraphrases. However, distributional similarity-based methods have correct paraphrases of higher rank. In particular, MIPA was able to include 10 words of correct paraphrases in the top 20 words; that is, our method can obtain paraphrases with high coverage by using only the highly ranked paraphrases. p(e 2 |e 1 ) log p(e 2 |e 1 ) + log p(e 1 |e 2 ) 2PMI(e 1 , e 2 ) cos(e 1 , e 2 ) cos(e 1 , e 2 )2PMI(e 1 , e 2 ) 1. workers 9. Table 3 : Evaluation by Pearson's correlation coefficient in semantic textual similarity task.", "cite_spans": [ { "start": 1979, "end": 2000, "text": "(Yu and Dredze, 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 192, "end": 200, "text": "Figure 6", "ref_id": "FIGREF4" }, { "start": 603, "end": 618, "text": "Figures 7 and 8", "ref_id": "FIGREF5" }, { "start": 1004, "end": 1011, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 2056, "end": 2063, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 2816, "end": 2823, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Next, we applied the acquired paraphrase pairs to the semantic textual similarity task and evaluated the extent to which the acquired paraphrases improve downstream applications. The semantic textual similarity task deals with calculating the semantic similarity between two sentences. 
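Equation (12) below formalizes the measure we use; as a minimal sketch (our illustration, with a trivial dictionary lookup standing in for the monolingual aligner of Sultan et al. (2014) and a toy paraphrase table):

```python
# Sketch of Equation (12): sts(s1, s2) = (n_a(s1) + n_a(s2)) / (n(s1) + n(s2)),
# where n_a counts aligned words. A trivial paraphrase-aware lookup stands in
# for the monolingual word aligner of Sultan et al. (2014).
def sts(s1, s2, paraphrases):
    def n_aligned(src, tgt):
        return sum(1 for w in src
                   if w in tgt or any(p in tgt for p in paraphrases.get(w, ())))
    return (n_aligned(s1, s2) + n_aligned(s2, s1)) / (len(s1) + len(s2))

pp = {"labourers": ["workers"], "workers": ["labourers"]}
s1 = "the labourers left".split()
s2 = "the workers left".split()
print(sts(s1, s2, pp))  # 1.0: every word aligns directly or via a paraphrase
```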
In this study, we conducted the evaluation by applying Pearson's correlation coefficient with a five-step manual evaluation using five datasets constructed by SemEval (Agirre et al., 2012 (Agirre et al., , 2013 (Agirre et al., , 2014 (Agirre et al., , 2015 (Agirre et al., , 2016 . We applied the acquired paraphrase pairs to the unsupervised method of DLC@CU (Sultan et al., 2015) , which achieved excellent results using PPDB in the semantic textual similarity task of SemEval-2015 (Agirre et al., 2015) . DLS@CU performs word alignment (Sultan et al., 2014) using PPDB, and calculates sentence similarity according to the ratio of aligned words:", "cite_spans": [ { "start": 453, "end": 473, "text": "(Agirre et al., 2012", "ref_id": "BIBREF3" }, { "start": 474, "end": 496, "text": "(Agirre et al., , 2013", "ref_id": "BIBREF4" }, { "start": 497, "end": 519, "text": "(Agirre et al., , 2014", "ref_id": "BIBREF1" }, { "start": 520, "end": 542, "text": "(Agirre et al., , 2015", "ref_id": "BIBREF0" }, { "start": 543, "end": 565, "text": "(Agirre et al., , 2016", "ref_id": "BIBREF2" }, { "start": 646, "end": 667, "text": "(Sultan et al., 2015)", "ref_id": "BIBREF25" }, { "start": 770, "end": 791, "text": "(Agirre et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sts(s 1 , s 2 ) = n a (s 1 ) + n a (s 2 ) n(s 1 ) + n(s 2 )", "eq_num": "(12)" } ], "section": "Quantitative Analysis", "sec_num": "5.2" }, { "text": "Here, n(s) indicates the number of words in sentence s and n a (s) indicates the number of aligned words. Although DLS@CU targets all the paraphrases of PPDB, we used only the top 10 words of the paraphrase score for each target word and compared the performance of the paraphrase scores. Table 3 shows the experimental results of the semantic textual similarity task. \"ALL\" is the weighted mean value of the Pearson's correlation coefficient over the five datasets. MIPA achieved the highest performance on three out of the five datasets. In other words, the proposed method extracted paraphrase pairs useful for calculating sentence similarity at the top-rank.", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "5.2" }, { "text": "Finally, we reranked paraphrase pairs from a publicly available state-of-the-art paraphrase database. 8 PPDB 2.0 (Pavlick et al., 2015) scores paraphrase pairs using supervised learning with 26,455 labeled data and 209 features. We sorted the paraphrase pairs from PPDB 2.0 using the MIPA instead of the PPDB 2.0 score and used the same evaluation means as described in Section 4. 
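For reference, the MIPA score of Equation (11) used for this reranking can be sketched end to end as follows (our illustration; the vectors stand in for word2vec CBOW embeddings and the probabilities for the smoothed pivot and unigram estimates):

```python
import math
import numpy as np

# End-to-end sketch of Equation (11): MIPA(e1,e2) = cos(e1,e2) * 2 PMI(e1,e2).
# The vectors and probabilities below are hypothetical placeholders.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mipa(v1, v2, p_e2_given_e1, p_e1_given_e2, p_e1, p_e2):
    two_pmi = (math.log(p_e2_given_e1) + math.log(p_e1_given_e2)
               - math.log(p_e1) - math.log(p_e2))
    return cosine(v1, v2) * two_pmi

v1, v2 = np.array([0.2, 0.7, 0.1]), np.array([0.25, 0.6, 0.2])
print(mipa(v1, v2, 0.30, 0.25, 1e-4, 2e-4))  # the cosine weight scales the PMI term
```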
Surprisingly, our unsupervised approach outperformed the paraphrase ranking performance of PPDB 2.0's supervised approach in terms of MRR ( Figure 9 ) and MAP ( Figure 10 ).", "cite_spans": [ { "start": 113, "end": 135, "text": "(Pavlick et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 521, "end": 529, "text": "Figure 9", "ref_id": "FIGREF7" }, { "start": 542, "end": 551, "text": "Figure 10", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Reranking PPDB 2.0", "sec_num": "5.3" }, { "text": "Levy and Goldberg (2014) explained a wellknown representation learning method for word embeddings, the skip-gram with negativesampling (SGNS) (Mikolov et al., 2013a,b) , as a matrix factorization of a word-context co-occurrence matrix with shifted positive PMI. In this paper, we explained a well-known method for paraphrase acquisition, bilingual pivoting (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013) , as a (weighted) PMI.", "cite_spans": [ { "start": 142, "end": 167, "text": "(Mikolov et al., 2013a,b)", "ref_id": null }, { "start": 357, "end": 391, "text": "(Bannard and Callison-Burch, 2005;", "ref_id": "BIBREF5" }, { "start": 392, "end": 418, "text": "Ganitkevitch et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Chan et al. (2011) reranked paraphrase pairs acquired via bilingual pivoting using distributional similarity. The main idea of reranking paraphrase pairs using information from a monolingual corpus is similar to ours, but Chan et al.'s method failed to acquire semantically similar paraphrases. We succeeded in acquiring semantically similar paraphrases because we effectively combined information from a bilingual corpus and a monolingual corpus by using weighted PMI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In addition to English, paraphrase databases are constructed in many languages using bilingual pivoting (Bannard and Callison-Burch, 2005 ). Ganitkevitch and Callison-Burch (2014) constructed paraphrase databases 8 in 23 languages, including European languages and Chinese.", "cite_spans": [ { "start": 104, "end": 137, "text": "(Bannard and Callison-Burch, 2005", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Furthermore, Mizukami et al. (2014) constructed the Japanese version 9 . In this study, we improved bilingual pivoting using a monolingual corpus. Since large-scale monolingual corpora are easily available for many languages, our proposed method may improve paraphrase databases in each of these languages.", "cite_spans": [ { "start": 13, "end": 35, "text": "Mizukami et al. (2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "PPDB (Ganitkevitch et al., 2013) constructed by bilingual pivoting is used in many NLP applications, such as learning word embeddings (Yu and Dredze, 2014) , semantic textual similarity (Sultan et al., 2015), machine translation (Mehdizadeh Seraj et al., 2015) , sentence compression , question answering (Sultan et al., 2016) , and text simplification (Xu et al., 2016) . 
Our proposed method may improve the performance of many of these NLP applications supported by PPDB.", "cite_spans": [ { "start": 5, "end": 32, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF9" }, { "start": 134, "end": 155, "text": "(Yu and Dredze, 2014)", "ref_id": "BIBREF28" }, { "start": 229, "end": 260, "text": "(Mehdizadeh Seraj et al., 2015)", "ref_id": "BIBREF16" }, { "start": 305, "end": 326, "text": "(Sultan et al., 2016)", "ref_id": "BIBREF26" }, { "start": 353, "end": 370, "text": "(Xu et al., 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We proposed a new approach for formalizing lexical paraphrasability based on weighted PMI and acquired paraphrase pairs using information from both a bilingual corpus and a monolingual corpus. Our proposed method, MIPA, uses bilingual pivoting weighted by distributional similarity to acquire paraphrase pairs robustly, as each of the methods complements the other. Experimental results using manually annotated datasets for lexical paraphrase showed that the proposed method outperformed bilingual pivoting and distributional similarity in terms of metrics such as MRR, MAP, coverage, and Spearman's correlation. We also confirmed the effectiveness of the proposed method by conducting an extrinsic evaluation on a semantic textual similarity task. In addition to the semantic textual similarity task, we hope to improve the performance of many NLP applications based on the results of this study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "PPDB 3 : \u03bb1 = \u03bb2 = 1 3 http://www.cis.upenn.edu/\u02dcccb/ppdb/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/europarl/ 5 https://catalog.ldc.upenn.edu/LDC2011T07 6 https://code.google.com/archive/p/word2vec/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.seas.upenn.edu/\u02dcepavlick/data.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://paraphrase.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://ahclab.naist.jp/resource/jppdb/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was (partly) supported by Grantin-Aid for Research on Priority Areas, Tokyo Metropolitan University, Research on social bigdata.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "and Pilot on Interpretability", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Montse", "middle": [], "last": "Maritxalar", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "German", "middle": [], "last": 
"Rigau", "suffix": "" }, { "first": "Larraitz", "middle": [], "last": "Uria", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "252--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 Task 2: Semantic Tex- tual Similarity, English, Spanish and Pilot on Inter- pretability. In Proceedings of the 9th International Workshop on Semantic Evaluation. pages 252-263.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SemEval-2014 Task 10: Multilingual Semantic Textual Similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "81--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 Task 10: Multilingual Semantic Textual Similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation. pages 81-91.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "497--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation. In Proceedings of the 10th International Workshop on Semantic Eval- uation. 
pages 497-511.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics. pages 385-393.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Second Joint Conference on Lexical and Computational Semantics", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "32--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic Textual Similarity. In Second Joint Conference on Lexical and Computational Seman- tics. pages 32-43.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Paraphrasing with Bilingual Parallel Corpora", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "597--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with Bilingual Parallel Corpora. In Pro- ceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics. pages 597-604.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Reranking Bilingually Extracted Paraphrases Using Monolingual Distributional Similarity", "authors": [ { "first": "Chris", "middle": [], "last": "Tsz Ping Chan", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "33--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsz Ping Chan, Chris Callison-Burch, and Benjamin Van Durme. 2011. Reranking Bilingually Extracted Paraphrases Using Monolingual Distributional Sim- ilarity. In Proceedings of the GEMS 2011 Work- shop on GEometrical Models of Natural Language Semantics. 
pages 33-42.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Statistics of Word Cooccurrences: Word Pairs and Collocations", "authors": [ { "first": "Stefan", "middle": [ "Evert" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Evert. 2005. The Statistics of Word Cooccur- rences: Word Pairs and Collocations. Ph.D. thesis, University of Stuttgart.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Multilingual Paraphrase Database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "4276--4283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch and Chris Callison-Burch. 2014. The Multilingual Paraphrase Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation. pages 4276-4283.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "PPDB: The Paraphrase Database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies. pages 758-764.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Simplifying Lexical Simplification: Do We Need Simplified Corpora?", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Sanja\u0160tajner. 2015. Simplifying Lexical Simplification: Do We Need Simplified Cor- pora? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. pages 63-68.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "KenLM: Faster and Smaller Language Model Queries", "authors": [ { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "187--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Heafield. 2011. KenLM: Faster and Smaller Language Model Queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. 
pages 187-197.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Training Products of Experts by Minimizing Contrastive Divergence", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2002, "venue": "Neural Computation", "volume": "14", "issue": "8", "pages": "1771--1800", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton. 2002. Training Products of Ex- perts by Minimizing Contrastive Divergence. Neu- ral Computation 14(8):1771-1800.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improved Backing-off for M-gram Language Modeling", "authors": [ { "first": "Reinhard", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "1", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved Backing-off for M-gram Language Modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. vol- ume 1, pages 181-184.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the Machine Translation Summit. pages 79-86.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural Word Embedding as Implicit Matrix Factorization", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2177--2185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural Word Embedding as Implicit Matrix Factorization. In Ad- vances in Neural Information Processing Systems. pages 2177-2185.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving Statistical Machine Translation with a Multilingual Paraphrase Database", "authors": [ { "first": "Maryam", "middle": [], "last": "Ramtin Mehdizadeh Seraj", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Siahbani", "suffix": "" }, { "first": "", "middle": [], "last": "Sarkar", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1379--1390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramtin Mehdizadeh Seraj, Maryam Siahbani, and Anoop Sarkar. 2015. Improving Statistical Ma- chine Translation with a Multilingual Paraphrase Database. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing. 
pages 1379-1390.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Workshop at the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Efficient Estimation of Word Repre- sentations in Vector Space. In Proceedings of Work- shop at the International Conference on Learning Representations. pages 1-12.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed Representations of Words and Phrases and Their Compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed Representa- tions of Words and Phrases and Their Composition- ality. In Advances in Neural Information Processing Systems. pages 3111-3119.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building a Free, General-Domain Paraphrase Database for Japanese", "authors": [ { "first": "Masahiro", "middle": [], "last": "Mizukami", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 17th Oriental COCOSDA Conference", "volume": "", "issue": "", "pages": "129--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Mizukami, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Build- ing a Free, General-Domain Paraphrase Database for Japanese. In Proceedings of the 17th Oriental COCOSDA Conference. pages 129 -133.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentential Paraphrasing as Black-Box Machine Translation", "authors": [ { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "62--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Courtney Napoles, Chris Callison-Burch, and Matt Post. 2016. Sentential Paraphrasing as Black-Box Machine Translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics. 
pages 62-66.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A System- atic Comparison of Various Statistical Alignment Models. Computational Linguistics 29(1):19-51.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Pushpendre", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "425--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing. pages 425- 430.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Back to Basics for Monolingual Alignment: Exploiting Word Similarity and Contextual Evidence", "authors": [ { "first": "Steven", "middle": [], "last": "Md Arafat Sultan", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "", "middle": [], "last": "Sumner", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "219--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Arafat Sultan, Steven Bethard, and Tamara Sum- ner. 2014. Back to Basics for Monolingual Align- ment: Exploiting Word Similarity and Contextual Evidence. Transactions of the Association for Com- putational Linguistics 2:219-230.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "DLS@CU: Sentence Similarity from Word Alignment and Semantic Vector Composition", "authors": [ { "first": "Steven", "middle": [], "last": "Md Arafat Sultan", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "", "middle": [], "last": "Sumner", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "148--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Arafat Sultan, Steven Bethard, and Tamara Sum- ner. 2015. DLS@CU: Sentence Similarity from Word Alignment and Semantic Vector Composition. In Proceedings of the 9th International Workshop on Semantic Evaluation. 
pages 148-153.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Joint Model for Answer Sentence Ranking and Answer Extraction", "authors": [ { "first": "Vittorio", "middle": [], "last": "Md Arafat Sultan", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Castelli", "suffix": "" }, { "first": "", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "113--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Arafat Sultan, Vittorio Castelli, and Radu Florian. 2016. A Joint Model for Answer Sentence Ranking and Answer Extraction. Transactions of the Associ- ation for Computational Linguistics 4:113-125.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Optimizing Statistical Machine Translation for Text Simplification", "authors": [ { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Quanze", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "401--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing Statistical Machine Translation for Text Simplifica- tion. Transactions of the Association for Computa- tional Linguistics 4:401-415.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Improving Lexical Embeddings with Semantic Knowledge", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "545--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu and Mark Dredze. 2014. Improving Lexical Embeddings with Semantic Knowledge. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics. pages 545-550.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Effectiveness of smoothing of bilingual pivoting evaluated by paraphrase ranking in MRR.", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Effectiveness of smoothing of bilingual pivoting evaluated by paraphrase ranking in MAP.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Paraphrase ranking in MRR.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "Paraphrase ranking in MAP.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "Coverage of the top-k paraphrase pairs.", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "\u03c1 : log p(e 2 |e 1 ).", "num": null, "type_str": "figure", "uris": null }, "FIGREF6": { "text": "\u03c1 : MIPA(e 1 , e 2 ).", "num": null, "type_str": "figure", "uris": null }, "FIGREF7": { "text": "Reranking PPDB 2.0 in MRR.", "num": null, "type_str": "figure", "uris": null }, "FIGREF8": { "text": "Reranking PPDB 2.0 in MAP.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "text": "Paraphrase examples of cultural. 
Italicized words are the correct paraphrases.", "html": null, "content": "", "num": null, "type_str": "table" }, "TABREF3": { "text": "Correct paraphrase examples of labourers.", "html": null, "content": "
Dataset | p(e_2|e_1) | log p(e_2|e_1) + log p(e_1|e_2) | 2 PMI(e_1, e_2) | cos(e_1, e_2) | cos(e_1, e_2) · 2 PMI(e_1, e_2)
STS-2012 | 0.539 | 0.466 | 0.383 | 0.363 | 0.442
STS-2013 | 0.489 | 0.469 | 0.463 | 0.483 | 0.499
STS-2014 | 0.464 | 0.460 | 0.471 | 0.453 | 0.475
STS-2015 | 0.611 | 0.655 | 0.660 | 0.642 | 0.671
STS-2016 | 0.444 | 0.518 | 0.550 | 0.518 | 0.542
ALL | 0.536 | 0.543 | 0.534 | 0.523 | 0.555
", "num": null, "type_str": "table" } } } }