{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:29.908002Z" }, "title": "Measuring Similarity of Opinion-bearing Sentences", "authors": [ { "first": "Wenyi", "middle": [], "last": "Tay", "suffix": "", "affiliation": { "laboratory": "", "institution": "RMIT University", "location": { "country": "Australia" } }, "email": "wenyi.tay@rmit.edu.au" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "RMIT University", "location": { "country": "Australia" } }, "email": "xiuzhen.zhang@rmit.edu.au" }, { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "", "affiliation": { "laboratory": "CSIRO Data61", "institution": "", "location": { "country": "Australia" } }, "email": "stephen.wan@data61.csiro.au" }, { "first": "Sarvnaz", "middle": [], "last": "Karimi", "suffix": "", "affiliation": { "laboratory": "CSIRO Data61", "institution": "", "location": { "country": "Australia" } }, "email": "sarvnaz.karimi@data61.csiro.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "For many NLP applications of online reviews, comparing two opinion-bearing sentences is the key. We argue that, while general purpose text similarity metrics have been applied for this purpose, there has been limited exploration of their applicability to opinion texts. We address this gap by studying: (1) how humans judge the similarity of pairs of opinionbearing sentences; and, (2) the degree to which existing text similarity metrics, particularly embedding-based ones, correspond to human judgments. We crowdsourced annotations for opinion sentence pairs and our main findings are: (1) annotators tend to agree on whether or not opinion sentences are similar or different; and (2) embedding-based metrics capture human judgments of \"opinion similarity\" but not \"opinion difference\". Based on our analysis, we identify areas where the current metrics should be improved. We further propose to learn a similarity metric for opinion similarity via fine-tuning the Sentence-BERT sentenceembedding network based on review text and weak supervision by review ratings. Experiments show that our learned metric outperforms existing text similarity metrics and especially show significantly higher correlations with human annotations for differing opinions.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "For many NLP applications of online reviews, comparing two opinion-bearing sentences is the key. We argue that, while general purpose text similarity metrics have been applied for this purpose, there has been limited exploration of their applicability to opinion texts. We address this gap by studying: (1) how humans judge the similarity of pairs of opinionbearing sentences; and, (2) the degree to which existing text similarity metrics, particularly embedding-based ones, correspond to human judgments. We crowdsourced annotations for opinion sentence pairs and our main findings are: (1) annotators tend to agree on whether or not opinion sentences are similar or different; and (2) embedding-based metrics capture human judgments of \"opinion similarity\" but not \"opinion difference\". Based on our analysis, we identify areas where the current metrics should be improved. We further propose to learn a similarity metric for opinion similarity via fine-tuning the Sentence-BERT sentenceembedding network based on review text and weak supervision by review ratings. 
Experiments show that our learned metric outperforms existing text similarity metrics and especially show significantly higher correlations with human annotations for differing opinions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Online reviews are an integral part of e-commerce platforms. Consumers utilize these reviews to make purchasing decisions, and businesses use this feedback to improve products or services. With the ever-growing number of reviews, NLP research has focused on methods to make sense of this vast data resource, including applications for opinion summarization (Suhara et al., 2020; Bra\u017einskas et al., 2020b; Mukherjee et al., 2020; Chu and Liu, 2019; Angelidis and Lapata, 2018) and opinion search (Poddar et al., 2017) .", "cite_spans": [ { "start": 357, "end": 378, "text": "(Suhara et al., 2020;", "ref_id": "BIBREF29" }, { "start": 379, "end": 404, "text": "Bra\u017einskas et al., 2020b;", "ref_id": "BIBREF7" }, { "start": 405, "end": 428, "text": "Mukherjee et al., 2020;", "ref_id": "BIBREF20" }, { "start": 429, "end": 447, "text": "Chu and Liu, 2019;", "ref_id": "BIBREF9" }, { "start": 448, "end": 475, "text": "Angelidis and Lapata, 2018)", "ref_id": "BIBREF3" }, { "start": 495, "end": 516, "text": "(Poddar et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A key characteristic of text in this domain is that it contains opinion-bearing sentences (hereafter, \"opinion sentences\"). As in preceding work (Pontiki et al., 2016) , we view an opinion as having an aspect (e.g., the feature of a product or dimension of a service) and an appraisal (e.g., a positive or negative sentiment). In many applications, one needs to determine if two related opinion sentences are comparable in meaning. From an applied viewpoint, one might think of two opinions being comparable if they support the same recommendation, with respect to the relevant aspect. To compare two opinions, prior work has employed text similarity metrics, where cosine similarity based on TF-IDF (Angelidis and Lapata, 2018) or embedding representations (Suhara et al., 2020) is used to measure opinion sentence similarity.", "cite_spans": [ { "start": 145, "end": 167, "text": "(Pontiki et al., 2016)", "ref_id": "BIBREF24" }, { "start": 700, "end": 728, "text": "(Angelidis and Lapata, 2018)", "ref_id": "BIBREF3" }, { "start": 758, "end": 779, "text": "(Suhara et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We group existing text similarity metrics broadly into two types: lexical-and embedding-based approaches. The lexical-based approaches, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) , evaluate text by capturing the overlap in surface forms, such as n-grams of tokens. However, they are often ineffective when texts employ paraphrases or synonyms. Embedding-based metrics, such as Word Mover's Distance (WMD) (Kusner et al., 2015) , MoverScore (Zhao et al., 2019) , and Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) , typically relax the restriction of strict string matching by comparing continuous representations for words and sentences. 
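To make the contrast concrete, the sketch below scores a paraphrased opinion pair with a simple ROUGE-1-style unigram overlap and with cosine similarity over SBERT sentence embeddings. This is a minimal illustration rather than the exact configuration used in our experiments; it assumes the sentence-transformers package and the publicly available stsb-bert-base checkpoint used later in Section 6, and the example sentences are invented.

```python
# Minimal illustration: lexical overlap vs. embedding similarity.
# Assumes the sentence-transformers package; the checkpoint name follows
# the one used later in this paper (stsb-bert-base).
from sentence_transformers import SentenceTransformer, util

def unigram_f1(a, b):
    # ROUGE-1-style F1 on lowercased whitespace tokens
    ta, tb = set(a.lower().split()), set(b.lower().split())
    overlap = len(ta & tb)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(tb), overlap / len(ta)
    return 2 * p * r / (p + r)

s1 = 'The battery lasts a very long time.'
s2 = 'Battery life is excellent.'

model = SentenceTransformer('stsb-bert-base')
e1, e2 = model.encode([s1, s2], convert_to_tensor=True)

print('unigram overlap F1:', round(unigram_f1(s1, s2), 3))            # low: few shared tokens
print('embedding cosine  :', round(util.cos_sim(e1, e2).item(), 3))   # typically much higher
```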
Such approaches have shown to work well for various NLP applications, including areas involving comparison of sentence meaning; for example, paraphrase detection, question answering, or summarization (Wang et al., 2018; Lan and Xu, 2018; Suhara et al., 2020) . However, a detailed study investigating their relationship to corresponding human judgments of opinion sentences is lacking.", "cite_spans": [ { "start": 149, "end": 172, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF22" }, { "start": 183, "end": 194, "text": "(Lin, 2004)", "ref_id": "BIBREF17" }, { "start": 421, "end": 442, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF15" }, { "start": 456, "end": 475, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF34" }, { "start": 504, "end": 532, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF27" }, { "start": 858, "end": 877, "text": "(Wang et al., 2018;", "ref_id": "BIBREF32" }, { "start": 878, "end": 895, "text": "Lan and Xu, 2018;", "ref_id": "BIBREF16" }, { "start": 896, "end": 916, "text": "Suhara et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In other text similarity settings, such as summary evaluation (Zhao et al., 2019) , caption evaluation (Zhang et al., 2020) and machine translation evaluation (Mathur et al., 2019) , embedding-based metrics out-perform lexical-based metrics, as demonstrated by its increased correlation with human judgment scores. However, embedding-based metrics have not yet been evaluated on opinion texts. The success of these embedding-based metrics in other types of text (such as news) cannot be guaranteed for opinion texts. This is because opinion text can be associated with a sentiment polarity. Opinion bearing words that are opposite in sentiment polarity are semantically related. Yet, many of these embedding-based metrics are often trained on semantic relatedness but not specifically sentiment polarity.", "cite_spans": [ { "start": 62, "end": 81, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF34" }, { "start": 103, "end": 123, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF33" }, { "start": 159, "end": 180, "text": "(Mathur et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We address the gap of lacking research on similarity for opinion-bearing texts in the literature with the following research questions: (1) how do humans evaluate similarity of two opinion sentences?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) how well do existing metrics capture similarity in a way similar to humans? and if not well, (3) how do we develop metrics to more effectively measure similarity for opinion sentences? We address the first question by conducting a crowdsourcing task that collects human annotations for the degree of similarity of two opinion sentences. 1 For the second research question, we examine the correlation of the text similarity metrics against our crowdsourced annotations. 
For the third question, we explore approaches to fine-tune embedding-based metrics for similarity of opinion texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make several contributions: (1) we collect and release a dataset of 1635 sentence pairs with similarity scores; (2) we show that annotators broadly agree on whether an opinion sentence pair is \"similar\" or \"different\"; (3) we demonstrate that text similarity metrics have weak correlation to human judgments of opinion similarity, and that they perform poorly with differing opinions in particular; (4) we conduct an analysis of differing opinions to characterize the limitations of such approaches when dealing with opinion sentences; and, (5) we propose to learn a metric for similarity of opinion texts by fine-tuning SBERT via weak supervision by review ratings. Our experiments show that the fine-tuned SBERT model outperforms existing metrics for distinguishing different opinions and for measuring similarity of opinion sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our research is related to text similarity metrics, which broadly include lexical-based metrics, embedding-based metrics and learned metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Lexical-based Metrics ROUGE (Lin, 2004 ) is a commonly used metric for opinion summary evaluation. It measures similarity between texts by counting the overlaps of n-grams. BLEU (Papineni et al., 2002) is the default metric for machine translation evaluation. Similar to ROUGE, it also relies on counting overlaps in n-grams. Such lexical matching methods face the same limitation in evaluating texts that are similar in meaning but expressed with different words (Ng and Abrecht, 2015; Shimanaka et al., 2018) . METEOR (Denkowski and Lavie, 2014) is proposed to relax the exact n-gram matching to allow matching words with its synonyms.", "cite_spans": [ { "start": 28, "end": 38, "text": "(Lin, 2004", "ref_id": "BIBREF17" }, { "start": 178, "end": 201, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF22" }, { "start": 464, "end": 486, "text": "(Ng and Abrecht, 2015;", "ref_id": "BIBREF21" }, { "start": 487, "end": 510, "text": "Shimanaka et al., 2018)", "ref_id": "BIBREF28" }, { "start": 520, "end": 547, "text": "(Denkowski and Lavie, 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Embedding-based Metrics Embedding-based metrics are proposed to overcome the limitations of lexical-based metrics (Zhelezniak et al., 2019; Clark et al., 2019; Zhang et al., 2020) . Word Mover's Distance (WMD) (Kusner et al., 2015) and MoverScore (Zhao et al., 2019) measure how similar two texts are by accumulating the distance between word embeddings and contextual embeddings, respectively. 
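As a rough sketch of how such a metric is computed in practice, the snippet below estimates WMD between two tokenized opinion sentences with Gensim and converts the distance to a similarity via exp(-WMD), the transform we adopt in Section 4. The word-vector file name is illustrative and assumes the standard 300-dimension GoogleNews word2vec release is available locally; depending on the Gensim version, an optimal-transport dependency (POT or pyemd) may also be required.

```python
# Sketch of Word Mover's Distance with Gensim; the vector file path is an
# assumption (the standard 300-d GoogleNews word2vec release).
import math
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

s1 = 'the service was friendly and fast'.split()
s2 = 'staff were quick and very polite'.split()

dist = wv.wmdistance(s1, s2)   # word mover's distance: lower means closer
similarity = math.exp(-dist)   # exp(-WMD) transform used in Section 4
print(dist, similarity)
```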
Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) is a sentence encoder that can be used with cosine similarity to capture similarity of meaning between sentences.", "cite_spans": [ { "start": 114, "end": 139, "text": "(Zhelezniak et al., 2019;", "ref_id": "BIBREF35" }, { "start": 140, "end": 159, "text": "Clark et al., 2019;", "ref_id": "BIBREF10" }, { "start": 160, "end": 179, "text": "Zhang et al., 2020)", "ref_id": "BIBREF33" }, { "start": 210, "end": 231, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF15" }, { "start": 247, "end": 266, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The objective of metric learning is to learn a task-specific similarity measure. There are two broad approaches to metric learning. Supervised metric learning requires a training dataset for the task. For example, machine translation metrics learn to score machine translations against human translations using previous machine translation datasets with human annotations (Shimanaka et al., 2018; Mathur et al., 2019) . Sentence similarity can be learnt using a Siamese network of sentence encoders with the Manhattan distance, trained on a semantic relatedness dataset (Mueller and Thyagarajan, 2016) . However, this approach to metric learning requires a labelled dataset, which is not always available.", "cite_spans": [ { "start": 372, "end": 396, "text": "(Shimanaka et al., 2018;", "ref_id": "BIBREF28" }, { "start": 397, "end": 417, "text": "Mathur et al., 2019)", "ref_id": "BIBREF18" }, { "start": 561, "end": 592, "text": "(Mueller and Thyagarajan, 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Learning", "sec_num": null }, { "text": "Alternatively, metric learning by weak supervision uses related data to guide the learning. During the training phase, the training dataset can be different from that of the end task, and the model can even be trained with a different training objective. To obtain the similarity of a pair of sentences, a Siamese network is trained with Natural Language Inference datasets (SNLI and MNLI) using cross-entropy loss (Reimers and Gurevych, 2019) . To learn the thematic similarity of sentences, the metric is trained with a Triplet network using triplets of sentences from Wikipedia sections (Ein Dor et al., 2018) .", "cite_spans": [ { "start": 396, "end": 424, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF27" }, { "start": 569, "end": 591, "text": "(Ein Dor et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Learning", "sec_num": null }, { "text": "To sum up our discussion, it is notable that the lexical-based metric ROUGE is still widely used in the literature for similarity of opinion texts in tasks like opinion summarization evaluation (Amplayo and Lapata, 2020; Bra\u017einskas et al., 2020a) . Although ROUGE correlates well with human judgments at the system level, it performs poorly at the summary text level (Bhandari et al., 2020) .", "cite_spans": [ { "start": 221, "end": 246, "text": "Bra\u017einskas et al., 2020a)", "ref_id": "BIBREF6" }, { "start": 370, "end": 393, "text": "(Bhandari et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Metric Learning", "sec_num": null }, { "text": "Our dataset is based on that of the SemEval 2016 Task 5: \"Aspect-Based Sentiment Analysis Subtask 1\" (Pontiki et al., 2016) .
The SemEval datasets contain review sentences on laptops and restaurants in English. We selected two sentences from reviews on the same product or business to create a sentence pair, for which we collected human judgments of similarity. We constructed 1800 sentence pairs using sentences of at least 3 and at most 25 tokens. Although our dataset covers only two domains, both are either commonly used in, or closely related to, available review datasets. Yelp reviews (Chu and Liu, 2019; Bra\u017einskas et al., 2020a) are often about restaurants, and Amazon reviews on electronics (Angelidis and Lapata, 2018; Bra\u017einskas et al., 2020a) are closely related to the laptop domain. We leave the investigation of more domains to future research.", "cite_spans": [ { "start": 101, "end": 123, "text": "(Pontiki et al., 2016)", "ref_id": "BIBREF24" }, { "start": 594, "end": 613, "text": "(Chu and Liu, 2019;", "ref_id": "BIBREF9" }, { "start": 614, "end": 639, "text": "Bra\u017einskas et al., 2020a)", "ref_id": "BIBREF6" }, { "start": 706, "end": 734, "text": "(Angelidis and Lapata, 2018;", "ref_id": "BIBREF3" }, { "start": 735, "end": 760, "text": "Bra\u017einskas et al., 2020a)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Annotations", "sec_num": "3.1" }, { "text": "To ensure that judgments were not trivially about different features of a product, we kept at least one aspect the same between sentences. In this way, annotations would depend on the expression of the appraisal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Annotations", "sec_num": "3.1" }, { "text": "Human judgments were collected using Amazon Mechanical Turk. 2 Only annotators with \"Mechanical Turk Masters Qualification\" were considered. Annotators were asked to rate the similarity in meaning of each pair of opinion sentences on a 5-level Likert scale, using methodology borrowed from the Semantic Textual Similarity (STS) shared task (Cer et al., 2017) . In our annotation task, the scale ranged from 0 (\"completely different opinion\") to 4 (\"completely same opinion\"), with the middle value, 2, indicating a partial match. We processed the annotations based on three quality-based criteria: (1) Filter out annotators with low accuracy on quality control sentence pairs; (2) Identify and filter out anomalous annotators; and, (3) Require a minimum of three annotations per sentence pair after filtering out annotators.", "cite_spans": [ { "start": 345, "end": 363, "text": "(Cer et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Annotations", "sec_num": "3.1" }, { "text": "Statistics of our dataset are shown in Table 1 . The dataset includes 1635 sentence pairs from reviews on two domains. The inter-annotator agreement is measured using Krippendorff's alpha (reliability coefficient) for ordinal levels (Artstein and Poesio, 2008) , with coefficients of 0.541 and 0.624 for Laptop and Restaurant, respectively, indicating a moderate level of agreement. We investigate the appropriateness of the 5-point Likert scale by comparing the inter-annotator agreement at different granularities of the Likert scale. Interestingly, while the 5-point scale had the highest level of agreement, a 3-point scale (grouping 0 and 1 together, and 3 and 4 together) led to a similar level of agreement, as shown in Table 2 . Grouping the center three levels together led to worse agreement.
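The agreement analysis above can be reproduced with the krippendorff Python package; the sketch below shows the ordinal-level computation and the re-binning of the 5-point labels into 3 levels, using a small made-up annotation matrix in place of our real crowdsourced ratings.

```python
# Krippendorff's alpha (ordinal) for a small, made-up annotation matrix;
# rows are annotators, columns are sentence pairs, np.nan marks missing ratings.
import numpy as np
import krippendorff

ratings = np.array([
    [4, 0, 2, 3, np.nan],
    [3, 1, 2, 4, 0],
    [4, 0, 1, 3, 0],
], dtype=float)

alpha_5 = krippendorff.alpha(reliability_data=ratings, level_of_measurement='ordinal')

def rebin(x):
    # {0,1} -> 0, {2} -> 1, {3,4} -> 2; missing ratings stay NaN
    return np.where(np.isnan(x), np.nan, np.where(x <= 1, 0, np.where(x == 2, 1, 2)))

alpha_3 = krippendorff.alpha(reliability_data=rebin(ratings), level_of_measurement='ordinal')
print(round(alpha_5, 3), round(alpha_3, 3))
```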
When grouping levels into 2 bins (0,1 vs 2,3,4), agreement increased for the Restaurant data (\u03b1 = 0.665) but decreased for the Laptop data (\u03b1 = 0.524). We refer to the two-level groupings as broadly different and broadly similar opinions. We draw on this 2-level distinction, also with moderate annotator agreement, later in this paper. Given the moderate level of agreement achieved, we argue that the human judges generally agreed on these similarity judgments. This also suggests that the 5-point scale we collected our annotations on is an appropriate choice.", "cite_spans": [ { "start": 233, "end": 260, "text": "(Artstein and Poesio, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 39, "end": 46, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 716, "end": 723, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Analysis", "sec_num": "3.2" }, { "text": "We explored this agreement further by examining the relationship of these judgments to sentiment polarity. Our sentence pairs were sampled with constraints on aspect but were unconstrained by sentiment, which is already annotated in the SemEval dataset. We grouped sentence pairs into those with the same polarity and those with contrasting polarity, and examined how humans judged similarity of opinions. We present these results in Figure 1 . The violin plots of human scores for sentence pairs with \"same\" sentiment polarity are spread above level 2, while those for sentence pairs with \"different\" sentiment polarity lie below level 2. This is consistent with what one would expect given an ordinal rating of similarity, and supports the use of a 5-point Likert scale.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 433, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis", "sec_num": "3.2" }, { "text": "The following baseline metrics are chosen for our investigation: (1) ROUGE variants, ROUGE-1, ROUGE-2 and ROUGE-L without stemming and without stopword removal, using ROUGE-2.0 (Ganesan, 2018); (2) SPICE (Anderson et al., 2016) , an image captioning evaluation metric that compares the scene graph of one text against that of the reference text; and, (3) WMD, using the implementation of WMD from Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) with the normalized 300-dimension word2vec trained on Google News. We follow Clark et al. (2019) and transform WMD scores into similarity scores using exp(\u2212WMD); (4) SBERT, using the sentence-transformers library in Python; and (5) MoverScore, using the authors' implementation. These metrics are representative of the different types of metrics. ROUGE is a lexical-based metric, SPICE is a metric that incorporates representations of sentence meaning, and WMD, SBERT and MoverScore are embedding-based metrics.", "cite_spans": [ { "start": 204, "end": 227, "text": "(Anderson et al., 2016)", "ref_id": "BIBREF2" }, { "start": 499, "end": 518, "text": "Clark et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "On Metrics and Human Judgments", "sec_num": "4" }, { "text": "Pearson and Kendall correlations are reported in Table 3 . Pearson correlation is often used for text similarity evaluation. However, Pearson correlation can be misleading because it measures only linear relationships, is sensitive to outliers, and requires the two variables to be approximately normally distributed (Reimers et al., 2016) .
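Both coefficients are available in SciPy; the sketch below correlates a metric's per-pair scores with the averaged human ratings, with short placeholder lists standing in for the 1635 annotated pairs.

```python
# Correlating a metric's per-pair scores with the averaged human ratings.
# The score lists here are placeholders for the annotated sentence pairs.
from scipy.stats import pearsonr, kendalltau

human_scores  = [4.0, 0.3, 2.0, 3.7, 1.0, 2.3]        # mean annotator rating per pair
metric_scores = [0.91, 0.35, 0.60, 0.82, 0.55, 0.48]  # e.g. SBERT cosine per pair

pearson_r, _ = pearsonr(metric_scores, human_scores)
kendall_tau, _ = kendalltau(metric_scores, human_scores)
print(f'Pearson r = {pearson_r:.3f}, Kendall tau = {kendall_tau:.3f}')
```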
We therefore include the Kendall correlation, a non-parametric correlation that is not limited to linear relationships, is less sensitive to outliers, and makes no assumption about the distribution of the variables. Amongst the baseline metrics, SBERT has the highest correlation, but the correlation is still weak.", "cite_spans": [ { "start": 314, "end": 336, "text": "(Reimers et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 49, "end": 56, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "On Metrics and Human Judgments", "sec_num": "4" }, { "text": "We grouped the data using the binary split (broadly different and broadly similar) presented in Section 3.2 and calculated the correlations again, presenting these in Table 4 . We observe that correlations for broadly different are generally lower for baseline metrics (e.g., Pearson correlation ranging from \u22120.033 to 0.257 for Laptop) than comparable values for the broadly similar group. This suggests that the metrics tested have difficulty in determining differences in meaning of opinion sentences, and in particular that they correlate poorly with human judgments for sentence pairs of opposite sentiment polarity. We hypothesize that the performance of embedding-based metrics can be improved with sentiment polarity information. A straightforward approach to improving embedding-based metrics for opinion similarity is to use sentiment-aware word embeddings. WMD is based on the Word2Vec word embeddings trained on Google News. For opinion similarity, the sentiment-specific word embeddings of Tang et al. (2014) , trained on tweets using the sentiment information associated with emoticons, can be used. We call this baseline approach WMD-SSWE.", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 173, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Broadly Different and Broadly Similar", "sec_num": "4.1" }, { "text": "Motivated by the observation that BERT-based sentence embeddings have shown superior performance for measuring sentence similarity, we propose to learn a metric for opinion similarity based on SBERT. Our metric is a Siamese network of SBERT that takes two sentences as inputs and outputs a similarity score. To overcome the problem of costly human-annotated similarity scores for training, we propose to train the metric through weak supervision based on review ratings. We call our model SOS (SBERT for Opinion Similarity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broadly Different and Broadly Similar", "sec_num": "4.1" }, { "text": "Online reviews usually contain a review text and a review rating. The review text contains opinion texts, and the review rating provides an overall sentiment polarity for the review text. For popular review platforms, the review rating typically spans a score of 1 to 5, where a rating of 1 is negative and 5 is positive. We can therefore take a review text associated with a higher rating to be positive and, similarly, a review text associated with a lower rating to be negative. In our work, we consider positive reviews to have a star rating of 4 or 5, and negative reviews to have a star rating of 1 or 2. We omit reviews with a star rating of 3. For the same product, review texts with the same sentiment polarity (both positive or both negative) are deemed to be similar, and review texts with different sentiment polarity (one positive and one negative) are deemed to be different. This forms the basis of creating the training datasets for fine-tuning our model.
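A minimal sketch of this weak-labelling step is given below. It assumes reviews are available as (product_id, stars, text) records, which is an illustrative format rather than the exact schema of the review datasets we use, and it applies the rule just described: ratings of 4-5 are positive, 1-2 negative, and 3-star reviews are discarded.

```python
# Weakly labelled review pairs from star ratings: same-polarity pairs are
# 'similar' (label 1), opposite-polarity pairs are 'different' (label 0).
# The record format (product_id, stars, text) is an assumption for illustration.
import random
from itertools import combinations
from collections import defaultdict

def polarity(stars):
    if stars >= 4:
        return 'pos'
    if stars <= 2:
        return 'neg'
    return None  # 3-star reviews are discarded

def build_pairs(reviews, n_pairs, seed=0):
    by_product = defaultdict(list)
    for product_id, stars, text in reviews:
        pol = polarity(stars)
        if pol is not None:
            by_product[product_id].append((pol, text))
    pairs = []
    for same_product in by_product.values():
        for (p1, t1), (p2, t2) in combinations(same_product, 2):
            pairs.append((t1, t2, 1 if p1 == p2 else 0))
    random.Random(seed).shuffle(pairs)
    # keep a balanced sample of similar and different pairs
    similar   = [p for p in pairs if p[2] == 1][: n_pairs // 2]
    different = [p for p in pairs if p[2] == 0][: n_pairs // 2]
    return similar + different
```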
We explore both Siamese networks and Triplet networks for training the opinion similarity model. The Siamese network for fine-tuning SOS, SOS S , formulates opinion similarity as a classification task, with the learning objective of classifying a pair of sentences as similar or not. The supervision is a binary signal that a pair is either similar or different. This approach is used by an unsupervised metric for sentence similarity, where a Siamese network of SBERT is fine-tuned as a classification task on the SNLI and MNLI datasets Reimers and Gurevych (2019) . For this work, we create training, development and test datasets of review pairs. Each pair contains reviews from the same product. The pair is either similar (both positive or both negative) or different (one positive and one negative). We also ensure that the dataset is balanced between similar and different pairs. The training objective is the cross-entropy loss.", "cite_spans": [ { "start": 1489, "end": 1516, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Broadly Different and Broadly Similar", "sec_num": "4.1" }, { "text": "The second variant of our metric is to fine-tune with a Triplet network for SOS, SOS T . Each training instance is a triplet of an anchor example, a positive example and a negative example. The learning objective is the triplet loss, which requires the distance between the anchor and the positive example to be smaller than the distance between the anchor and the negative example by a margin. Each triplet is constructed from reviews for the same product. We randomly select a review to form the anchor, then randomly select another review with the same sentiment polarity as the anchor to serve as the positive example, and finally select a review with the opposite sentiment polarity to the anchor as the negative example. For this work, we create training, development and test datasets of review triplets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broadly Different and Broadly Similar", "sec_num": "4.1" }, { "text": "Both Siamese and Triplet networks are effective for metric learning, but SOS T performs better than SOS S , as the training triplets provide context that helps model the similarity more effectively. This finding is consistent with the literature for Semantic Text Similarity (Hoffer and Ailon, 2015) .", "cite_spans": [ { "start": 276, "end": 300, "text": "(Hoffer and Ailon, 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Broadly Different and Broadly Similar", "sec_num": "4.1" }, { "text": "We examine the performance of our model variants SOS S and SOS T on measuring similarity of sentences. Specifically, we compare which training network, Siamese or Triplet, is more appropriate for fine-tuning our model for our task. Apart from comparing the networks, we also include variations in constructing the training pairs or triplets: we construct pairs and triplets from the entire review text, the first sentence of the review text, or a random sentence of the review text.
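For reference, fine-tuning the triplet variant with the sentence-transformers library looks roughly like the sketch below. The triplets list is a small placeholder for the (anchor, positive, negative) review texts constructed as described above, the configuration values mirror those reported in the next section (one epoch, batch size of eight, 10% warm-up), and the margin shown is illustrative.

```python
# Rough sketch of fine-tuning the triplet variant (SOS_T) with sentence-transformers.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

triplets = [
    ('Great battery life.', 'The battery easily lasts all day.', 'The battery drains far too quickly.'),
    # ... many more weakly supervised (anchor, positive, negative) review texts
]

model = SentenceTransformer('stsb-bert-base')
train_examples = [InputExample(texts=[a, p, n]) for a, p, n in triplets]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.TripletLoss(model=model, triplet_margin=1)

model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=1,
    warmup_steps=int(0.1 * len(train_loader)),  # 10% warm-up steps
)

# At inference time the fine-tuned encoder is used like SBERT: the similarity of
# two opinion sentences is the cosine similarity of their embeddings.
```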
We choose to include these sentence-level variations because our task is at the sentence level, so training examples at the sentence level are a natural consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "We train four variants of SOS: (1) SOS-Siamese-PC (SOS S P C ) and SOS-Triplet-PC (SOS T P C ), trained with reviews from the Amazon PC dataset (https://s3.amazonaws.com/amazon-reviews-pds/tsv/index.txt); and, (2) SOS-Siamese-Yelp (SOS S Y elp ) and SOS-Triplet-Yelp (SOS T Y elp ), trained with reviews from the Yelp Academic dataset (https://www.yelp.com/dataset). These two review datasets are selected with the intention of training our models on domain-related reviews: the models trained on Amazon PC reviews roughly parallel the Laptop dataset, and the models trained on Yelp reviews roughly parallel the Restaurant dataset. We implement our models in Python using the sentence-transformers library. We use the SBERT (stsb-bert-base) model in our metric, fine-tuned with 10% warm-up steps, one epoch and a batch size of eight. We run each model three times and report the average correlation on our opinion similarity evaluation dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "The accuracy on the development datasets determines which models to choose. For the SOS S models, SOS S P C and SOS S Y elp are both trained with 8000 training examples of entire reviews. Our best SOS T models are SOS T P C , fine-tuned with 1000 training triplets of entire reviews and a margin of 1, and SOS T Y elp , fine-tuned with 3000 training triplets of entire reviews and a margin of 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "Out of the models we propose, the metric learning models consistently outperform the best embedding-based model (SBERT) ( Table 5 ). Table 6 reports the correlation of our metrics and human scores for the broadly different and broadly similar groups (the highest correlation is in bold; SBERT is the best baseline metric; P: Pearson and K: Kendall). Our best model for Restaurant is SOS T Y elp . Although SOS T Y elp has the highest Pearson correlation for Laptop, its Kendall correlation is comparable to SOS T P C . This suggests that training on Yelp reviews generalizes to both Laptop and Restaurant opinions. Our models outperform SBERT even when not fine-tuned in a relevant domain. On the other hand, WMD-SSWE has poor correlation with human judgments.", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 6", "ref_id": null }, { "start": 415, "end": 422, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Main Results", "sec_num": "6.1" }, { "text": "Comparing the SOS S and SOS T models, triplet-trained models achieve higher correlations than the models trained with pairs. A possible explanation is that triplets capture context information that is beneficial for evaluating sentence pairs that are broadly similar. This result is consistent with the observation by Hoffer and Ailon (2015) .", "cite_spans": [ { "start": 310, "end": 333, "text": "Hoffer and Ailon (2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "6.1" }, { "text": "For broadly different sentence pairs, almost all our models have a higher correlation than SBERT.
This result supports our hypothesis that sentiment polarity information is helpful for opinion similarity, and that our proposed weak supervision methods (training by pairs and triplets) are effective for learning sentiment polarity information. For broadly similar sentence pairs, the SOS T models consistently improve both Pearson and Kendall correlation over SBERT. On the other hand, our SOS S models are not always better. This suggests that the triplet training is more appropriate for \"Broadly Similar\" pairs. Although the correlation on \"Broadly Different\" pairs has increased, it is still not as high as the correlation on \"Broadly Similar\" pairs. We discuss this further in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "6.1" }, { "text": "Granularity of the text in the training examples can potentially affect the performance of the metric (Ein Dor et al., 2018) . We compare the performance of different models trained on training examples constructed from entire reviews, the first sentence, or a random sentence.", "cite_spans": [ { "start": 102, "end": 124, "text": "(Ein Dor et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Granularity of Text", "sec_num": "6.2" }, { "text": "The Pearson correlation on the opinion similarity task is plotted in Figure 2 for the SOS S models and in Figure 3 for the SOS T models. We present the results for Pearson correlation, as Kendall correlation exhibits a similar trend.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 118, "end": 126, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Granularity of Text", "sec_num": "6.2" }, { "text": "Overall, the best models on our opinion similarity dataset are trained on examples of entire reviews, except for SOS S P C , which is best with random sentence selection. We initially thought that sentence examples would be more appropriate, as our task is at the sentence level. However, our results show otherwise. One possible reason is that a review text contains a mix of positive and negative opinions. Selecting the first sentence or a random sentence might not correspond to the overall review rating, resulting in a noisy training dataset that reduces the effectiveness of training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Granularity of Text", "sec_num": "6.2" }, { "text": "We investigate \"How much do we fine-tune our model with weak supervision?\". Our best models are selected based on the accuracy on the development set. However, is optimizing on the development dataset a good strategy to obtain the model that performs best on the end task?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimizing on Development Dataset", "sec_num": "6.3" }, { "text": "We observe that optimizing the models on the development set does not lead to the highest correlation on the opinion similarity task (Table 7) . However, the performance of these models is not significantly different (two-sided Permutation Test for paired data at 5%) from the model with the highest Pearson correlation. Moreover, all our selected models outperform the SBERT model. Hence, model selection based on the development dataset is a reasonable way to select our model for our task.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 142, "text": "(Table 7)", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Optimizing on Development Dataset", "sec_num": "6.3" }, { "text": "We examined possible reasons why metrics have difficulty assessing differences in opinion. Table 9 shows how many pairs were deemed similar (in the top quartile (Q1)) when in fact the average human rating indicated they were different. For SBERT, the metric correlating best with human judgments (from Table 3 ), human judgments are diametrically opposed to the metric scores for 5-10% of differing opinion pairs. The SOS variants reduce the percentage of wrongly scored pairs to almost 0%. Table 8 presents examples where automatic metrics are confused. Our analysis suggests three possible reasons for the weak correlation: sentence pairs that are opposite in sentiment polarity, implicit aspects, and implied opinions. To better understand the frequency of errors for our dataset, we sampled 100 sentence pairs from each domain and classified the challenges. Our annotations show that, on average across both domains, 41% of the sentence pairs contain sentences that are opposite in polarity, 12% contain implicit aspects, and 10% contain implied opinions.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 9", "ref_id": "TABREF12" }, { "start": 302, "end": 309, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 479, "end": 486, "text": "Table 8", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Although our SOS models have higher correlation than SBERT for \"Broadly Different\" pairs, the correlations are still not at the same level as for \"Broadly Similar\" pairs. In particular, the SOS models are still not able to handle opinions that contain implicit aspects or implied opinions well. This is a possible reason for the lower correlation on \"Broadly Different\" pairs. We leave addressing these two challenges to future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "7" }, { "text": "Our work has implications for the automatic evaluation of review summaries. ROUGE and its variants are the default automatic metrics for review summary evaluation (Bra\u017einskas et al., 2020a; Suhara et al., 2020; Amplayo et al., 2021) . However, ROUGE has been shown to be ineffective at evaluating summary pairs with opposite sentiment polarity (Tay et al., 2019) . Sentiment agreement is recognized as an important dimension of a review summary and has been included in the human evaluation component of review summaries (Chu and Liu, 2019) . This calls for new automatic metrics that consider the agreement of sentiment polarity between summary pairs. Our work fits into this area because we demonstrated that our SOS metric captures sentiment agreement at the sentence level.
Future work in this area is to extend the SOS metric to review summary evaluation.", "cite_spans": [ { "start": 164, "end": 190, "text": "(Bra\u017einskas et al., 2020a;", "ref_id": "BIBREF6" }, { "start": 191, "end": 211, "text": "Suhara et al., 2020;", "ref_id": "BIBREF29" }, { "start": 212, "end": 233, "text": "Amplayo et al., 2021)", "ref_id": "BIBREF0" }, { "start": 339, "end": 357, "text": "(Tay et al., 2019)", "ref_id": "BIBREF31" }, { "start": 520, "end": 539, "text": "(Chu and Liu, 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "While we focus on investigating text similarity for opinion sentences in this work, an equally interesting direction is to approach this from an inference perspective where one opinion sentence entails the other. We leave this line of investigation to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "We investigate how humans make similarity judgments over opinion sentences. We contribute a dataset of crowdsourced similarity judgments for opinion sentences. The agreement amongst annotators for judgments is moderate. We study the limitations of current text similarity methods when they are adopted for this task and our analysis show that this is likely due to the inability of current metrics to model differing opinions. By fine-tuning Siamese Sentence-BERT using weak supervision, we increase the Pearson correlation with human judgments to 0.606 and 0.794 on Laptop and Restaurant respectively of our opinion similarity dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "The data is available at https://github.com/ wenyi-tay/sos.git.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "SOS: Sentence-BERT for Opinion Similarity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their thorough and insightful comments. Wenyi is supported by an Australian Government Research Training Program Scholarship and a CSIRO Data61 Top-up Scholarship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised opinion summarization with content planning", "authors": [ { "first": "Reinald", "middle": [], "last": "Kim Amplayo", "suffix": "" }, { "first": "Stefanos", "middle": [], "last": "Angelidis", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "", "pages": "12489--12497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021. Unsupervised opinion sum- marization with content planning. 
Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12489-12497.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised opinion summarization with noising and denoising", "authors": [ { "first": "Reinald", "middle": [], "last": "Kim Amplayo", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1934--1945", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinald Kim Amplayo and Mirella Lapata. 2020. Un- supervised opinion summarization with noising and denoising. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1934-1945, Online.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Spice: Semantic propositional image caption evaluation", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Basura", "middle": [], "last": "Fernando", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gould", "suffix": "" } ], "year": 2016, "venue": "Computer Vision -ECCV 2016", "volume": "", "issue": "", "pages": "382--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic proposi- tional image caption evaluation. In Computer Vision -ECCV 2016, pages 382-398, Cham. Springer Inter- national Publishing.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised", "authors": [ { "first": "Stefanos", "middle": [], "last": "Angelidis", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3675--3686", "other_ids": { "DOI": [ "10.18653/v1/D18-1403" ] }, "num": null, "urls": [], "raw_text": "Stefanos Angelidis and Mirella Lapata. 2018. Sum- marizing Opinions: Aspect Extraction Meets Senti- ment Prediction and They Are Both Weakly Super- vised. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 3675-3686, Brussels, Belgium.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Survey article: Inter-coder agreement for computational linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "555--596", "other_ids": { "DOI": [ "10.1162/coli.07-034-R2" ] }, "num": null, "urls": [], "raw_text": "Ron Artstein and Massimo Poesio. 2008. Survey ar- ticle: Inter-coder agreement for computational lin- guistics. 
Computational Linguistics, 34(4):555- 596.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Reevaluating evaluation in text summarization", "authors": [ { "first": "Manik", "middle": [], "last": "Bhandari", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Narayan Gour", "suffix": "" }, { "first": "Atabak", "middle": [], "last": "Ashfaq", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "9347--9359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manik Bhandari, Pranav Narayan Gour, Atabak Ash- faq, Pengfei Liu, and Graham Neubig. 2020. Re- evaluating evaluation in text summarization. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9347-9359, Online.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Few-shot learning for opinion summarization", "authors": [ { "first": "Arthur", "middle": [], "last": "Bra\u017einskas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4119--4135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Bra\u017einskas, Mirella Lapata, and Ivan Titov. 2020a. Few-shot learning for opinion summariza- tion. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 4119-4135, Online.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unsupervised opinion summarization as copycat-review generation", "authors": [ { "first": "Arthur", "middle": [], "last": "Bra\u017einskas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5151--5169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Bra\u017einskas, Mirella Lapata, and Ivan Titov. 2020b. Unsupervised opinion summarization as copycat-review generation. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics, pages 5151-5169, Online.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.18653/v1/S17-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. 
In Proceedings of the International Workshop on Semantic Evaluation, pages 1-14, Vancouver, Canada.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "MeanSum: A neural model for unsupervised multi-document abstractive summarization", "authors": [ { "first": "Eric", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1223--1232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Chu and Peter Liu. 2019. MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proceedings of the International Conference on Machine Learning, pages 1223-1232, Long Beach, CA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sentence mover's similarity: Automatic evaluation for multi-sentence texts", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2748--2760", "other_ids": { "DOI": [ "10.18653/v1/P19-1264" ] }, "num": null, "urls": [], "raw_text": "Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. Sentence mover's similarity: Automatic eval- uation for multi-sentence texts. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics, pages 2748-2760, Florence, Italy.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Meteor Universal: Language Specific Translation Evaluation for Any Target Language", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "376--380", "other_ids": { "DOI": [ "10.3115/v1/W14-3348" ] }, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evalua- tion for Any Target Language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376-380, Baltimore, MD.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning thematic similarity metric from article sections using triplet networks", "authors": [ { "first": "Yosi", "middle": [], "last": "Liat Ein Dor", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Mass", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Halfon", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Venezian", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Shnayderman", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "49--54", "other_ids": { "DOI": [ "10.18653/v1/P18-2009" ] }, "num": null, "urls": [], "raw_text": "Liat Ein Dor, Yosi Mass, Alon Halfon, Elad Venezian, Ilya Shnayderman, Ranit Aharonov, and Noam Slonim. 2018. Learning thematic similarity metric from article sections using triplet networks. 
In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 49-54, Melbourne, Australia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "ROUGE 2.0: Updated and improved measures for evaluation of summarization tasks", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan. 2018. ROUGE 2.0: Updated and improved measures for evaluation of summarization tasks. CoRR, abs/1803.01937.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep metric learning using triplet network", "authors": [ { "first": "Elad", "middle": [], "last": "Hoffer", "suffix": "" }, { "first": "Nir", "middle": [], "last": "Ailon", "suffix": "" } ], "year": 2015, "venue": "Similarity-Based Pattern Recognition", "volume": "", "issue": "", "pages": "84--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition, pages 84-92, Cham. Springer Interna- tional Publishing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [ "J" ], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the In- ternational Conference on International Conference on Machine Learning -Volume 37, pages 957-966, Lille, France.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering", "authors": [ { "first": "Wuwei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3890--3902", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wuwei Lan and Wei Xu. 2018. Neural network models for paraphrase identification, semantic textual simi- larity, natural language inference, and question an- swering. In Proceedings of the International Con- ference on Computational Linguistics, pages 3890- 3902, Santa Fe, NM.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Putting evaluation in context: Contextual embeddings improve machine translation evaluation", "authors": [ { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2799--2808", "other_ids": { "DOI": [ "10.18653/v1/P19-1269" ] }, "num": null, "urls": [], "raw_text": "Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual embeddings improve machine translation evaluation. In Proceedings of the Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2799- 2808, Florence, Italy.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Siamese recurrent architectures for learning sentence similarity", "authors": [ { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Thyagarajan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2786--2792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In Proceedings of the AAAI Conference on Arti- ficial Intelligence, page 2786-2792, Phoenix, AZ.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Read what you need: Controllable aspect-based opinion summarization of tourist reviews", "authors": [ { "first": "Rajdeep", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Hari", "middle": [ "Chandana" ], "last": "Peruri", "suffix": "" }, { "first": "Uppada", "middle": [], "last": "Vishnu", "suffix": "" }, { "first": "Pawan", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Sourangshu", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Niloy", "middle": [], "last": "Ganguly", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1825--1828", "other_ids": { "DOI": [ "10.1145/3397271.3401269" ] }, "num": null, "urls": [], "raw_text": "Rajdeep Mukherjee, Hari Chandana Peruri, Uppada Vishnu, Pawan Goyal, Sourangshu Bhattacharya, and Niloy Ganguly. 2020. Read what you need: Controllable aspect-based opinion summarization of tourist reviews. In Proceedings of the International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, page 1825-1828, Vir- tual Event, China.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Better summarization evaluation with word embeddings for ROUGE", "authors": [ { "first": "Jun-Ping", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Viktoria", "middle": [], "last": "Abrecht", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1925--1930", "other_ids": { "DOI": [ "10.18653/v1/D15-1222" ] }, "num": null, "urls": [], "raw_text": "Jun-Ping Ng and Viktoria Abrecht. 2015. Better sum- marization evaluation with word embeddings for ROUGE. 
In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, pages 1925-1930, Lisbon, Portugal.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics, pages 311-318, Philadelphia, PA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Author-aware aspect topic sentiment model to retrieve supporting opinions from reviews", "authors": [ { "first": "Lahari", "middle": [], "last": "Poddar", "suffix": "" }, { "first": "Wynne", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Mong Li", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "472--481", "other_ids": { "DOI": [ "10.18653/v1/D17-1049" ] }, "num": null, "urls": [], "raw_text": "Lahari Poddar, Wynne Hsu, and Mong Li Lee. 2017. Author-aware aspect topic sentiment model to re- trieve supporting opinions from reviews. 
In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 472-481, Copenhagen, Denmark.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "SemEval-2016 task 5: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageorgiou", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Androutsopoulos", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" }, { "first": "Al-", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Mahmoud", "middle": [], "last": "Smadi", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Al-Ayyoub", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "Qin", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "De Clercq", "suffix": "" }, { "first": "Marianna", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Apidianaki", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Tannier", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Loukachevitch", "suffix": "" }, { "first": "", "middle": [], "last": "Kotelnikov", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "19--30", "other_ids": { "DOI": [ "10.18653/v1/S16-1002" ] }, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the International Workshop on Semantic Evaluation, pages 19-30, San Diego, CA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC Workshop on New Chal- lenges for NLP Frameworks, pages 45-50, Valletta, Malta.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Task-oriented intrinsic evaluation of semantic textual similarity", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Beyer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers, Philip Beyer, and Iryna Gurevych. 2016. Task-oriented intrinsic evaluation of semantic tex- tual similarity. 
In Proceedings of the International Conference on Computational Linguistics: Techni- cal Papers, pages 87-96, Osaka, Japan.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "3982--3992", "other_ids": { "DOI": [ "10.18653/v1/D19-1410" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing and the International Joint Conference on Nat- ural Language Processing, pages 3982-3992, Hong Kong, China.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "RUSE: Regressor using sentence embeddings for automatic machine translation evaluation", "authors": [ { "first": "Hiroki", "middle": [], "last": "Shimanaka", "suffix": "" }, { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "751--758", "other_ids": { "DOI": [ "10.18653/v1/W18-6456" ] }, "num": null, "urls": [], "raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. RUSE: Regressor using sentence embeddings for automatic machine translation eval- uation. In Proceedings of the Conference on Ma- chine Translation: Shared Task Papers, pages 751- 758, Belgium, Brussels.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "OpinionDigest: A simple framework for opinion summarization", "authors": [ { "first": "Yoshihiko", "middle": [], "last": "Suhara", "suffix": "" }, { "first": "Xiaolan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stefanos", "middle": [], "last": "Angelidis", "suffix": "" }, { "first": "Wang-Chiew", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5789--5798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshihiko Suhara, Xiaolan Wang, Stefanos Angelidis, and Wang-Chiew Tan. 2020. OpinionDigest: A simple framework for opinion summarization. 
In Proceedings of the Annual Meeting of the Associ- ation for Computational Linguistics, pages 5789- 5798, Online.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1555--1565", "other_ids": { "DOI": [ "10.3115/v1/P14-1146" ] }, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1555-1565, Baltimore, MD.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Red-faced ROUGE: Examining the suitability of ROUGE for opinion summary evaluation", "authors": [ { "first": "Wenyi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sarvnaz", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Annual Workshop of the Australasian Language Technology Association", "volume": "", "issue": "", "pages": "52--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenyi Tay, Aditya Joshi, Xiuzhen Zhang, Sarv- naz Karimi, and Stephen Wan. 2019. Red-faced ROUGE: Examining the suitability of ROUGE for opinion summary evaluation. In Proceedings of the Annual Workshop of the Australasian Language Technology Association, pages 52-60, Sydney, Aus- tralia.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": { "DOI": [ "10.18653/v1/W18-5446" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. 
In Pro- ceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations, Online.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "563--578", "other_ids": { "DOI": [ "10.18653/v1/D19-1053" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the Conference on Empirical Methods in Natu- ral Language Processing and the International Joint Conference on Natural Language Processing, pages 563-578, Hong Kong, China.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Don't settle for average, go for the max: Fuzzy sets and max-pooled word vectors", "authors": [ { "first": "Vitalii", "middle": [], "last": "Zhelezniak", "suffix": "" }, { "first": "Aleksandar", "middle": [], "last": "Savkov", "suffix": "" }, { "first": "April", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Moramarco", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Flann", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hammerla", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Francesco Moramarco, Jack Flann, and Nils Y. Ham- merla. 2019. Don't settle for average, go for the max: Fuzzy sets and max-pooled word vectors. 
In International Conference on Learning Representa- tions, New Orleans, LA.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Violin plot of human score of sentence pair by domain and sentiment polarity match.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Pearson correlation of SOS S models on our dataset. The dashed line is SBERT.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Pearson correlation of SOS T models on our dataset. The dashed line is SBERT.", "type_str": "figure", "num": null }, "TABREF1": { "text": "Statistics on our annotated dataset. Krippendorff's alpha, average number of annotations and average variance of annotations, per pair for each domain.", "content": "
#Levels  Grouping         Laptop  Restaurant
2        (0,1) (2,3,4)    0.524   0.665
3        (0,1) (2) (3,4)  0.536   0.624
3        (0) (1,2,3) (4)  0.250   0.312
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "Agreement for different Likert scales.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "Correlation of existing embedding metrics and human scores. The highest correlation is in bold.", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF6": { "text": "", "content": "
", "type_str": "table", "num": null, "html": null }, "TABREF8": { "text": "Correlation of our metrics and human scores. The highest correlation is in bold. SBERT is the best baseline metric. P: Pearson and K: Kendall.", "content": "
                 Laptop          Restaurant
Metric           P       K       P       K
Broadly Different
SBERT            0.156   0.108   0.119   0.070
WMD-SSWE         0.244   0.298   0.063   0.171
SOS^S_PC         0.262   0.176   0.252   0.196
SOS^S_Yelp       0.389   0.273   0.232   0.121
SOS^T_PC         0.249   0.299   0.173   0.268
SOS^T_Yelp       0.396   0.275   0.181   0.184
Broadly Similar
SBERT            0.391   0.272   0.399   0.276
WMD-SSWE         0.141   0.059   0.105   0.104
SOS^S_PC         0.266   0.215   0.366   0.323
SOS^S_Yelp       0.284   0.236   0.454   0.365
SOS^T_PC         0.478   0.462   0.305   0.401
SOS^T_Yelp       0.346   0.399   0.529   0.333
", "type_str": "table", "num": null, "html": null }, "TABREF9": { "text": "Pearson correlation of our selected models and highest correlation amongst all models.", "content": "", "type_str": "table", "num": null, "html": null }, "TABREF10": { "text": "Rice is too dry, tuna wasn't so fresh either. S2: Hands down, the best tuna I have ever had.", "content": "
Sentence Pair | Human | SBERT | SOS^T_PC | SOS^T_Yelp | Explanation
Restaurant
S1: Rice is too dry, tuna wasn't so fresh either. S2: Hands down, the best tuna I have ever had. | Q4 | Q1 | Q2 | Q4 | Opposite sentiment
S1: It was absolutely amazing. S2: This place is unbelievably over-rated. | Q4 | Q1 | Q3 | Q4 | Opposite sentiment and Implicit aspect (S1)
S1: Worst Service I Ever Had. S2: We waited over 30 minutes for our drinks and over 1 1/2 hours for our food. | Q1 | Q4 | Q4 | Q3 | Implied opinion (S2)
Laptop
S1: You will not regret buying this computer! S2: I can't believe people like these computers. | Q4 | Q1 | Q2 | Q4 | Opposite sentiment
S1: This is very fast, high performance computer. S2: It wakes in less than a second when I open the lid. | Q1 | Q4 | Q3 | Q3 | Implicit aspect (S2) and Implied opinion (S2)
", "type_str": "table", "num": null, "html": null }, "TABREF11": { "text": "Examples of sentence pairs where SBERT scores are inconsistent with human score. We expect the metric scores to be in similar quartiles of human scores. SOS T P C and SOS S Y elp are able to score sentence pairs of opposite sentiment more correctly but not sentence pairs of implicit aspect and implied opinion.", "content": "
Metric                            Laptop  Restaurant
Broadly Different Sentence Pairs  116     227
SPICE                             0.069   0.062
WMD                               0.181   0.172
SBERT                             0.052   0.097
MoverScore                        0.250   0.198
SOS^T_PC                          0.000   0.018
SOS^T_Yelp                        0.017   0.000
Broadly Similar Sentence Pairs    505     787
SPICE                             0.000   0.000
WMD                               0.028   0.044
SBERT                             0.008   0.024
MoverScore                        0.022   0.038
SOS^T_PC                          0.004   0.013
SOS^T_Yelp                        0.012   0.005
", "type_str": "table", "num": null, "html": null }, "TABREF12": { "text": "", "content": "
Proportion of sentence pairs that are broadly different but scored in Q1 (Top 25%) and broadly similar but scored in Q4 (Bottom 25%) by metric scores.
", "type_str": "table", "num": null, "html": null } } } }