{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:48.663641Z" }, "title": "Matching The Statements: A Simple and Accurate Model for Key Point Analysis", "authors": [ { "first": "Viet", "middle": [ "Hoang" ], "last": "Phan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hanoi University of Science and Technology", "location": { "addrLine": "{hoang.pv180086, long.nt180129, long.nd183583" } }, "email": "" }, { "first": "Tien", "middle": [], "last": "Long", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hanoi University of Science and Technology", "location": { "addrLine": "{hoang.pv180086, long.nt180129, long.nd183583" } }, "email": "" }, { "first": "Nguyen", "middle": [], "last": "Duc", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hanoi University of Science and Technology", "location": { "addrLine": "{hoang.pv180086, long.nt180129, long.nd183583" } }, "email": "" }, { "first": "Long", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hanoi University of Science and Technology", "location": { "addrLine": "{hoang.pv180086, long.nt180129, long.nd183583" } }, "email": "" }, { "first": "Ngoc", "middle": [ "Khanh" ], "last": "Doan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hanoi University of Science and Technology", "location": { "addrLine": "{hoang.pv180086, long.nt180129, long.nd183583" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Key Point Analysis (KPA) is one of the most essential tasks in building an Opinion Summarization system, which is capable of generating key points for a collection of arguments toward a particular topic. Furthermore, KPA allows quantifying the coverage of each summary by counting its matched arguments. With the aim of creating high-quality summaries, it is necessary to have an in-depth understanding of each individual argument as well as its universal semantic in a specified context. In this paper, we introduce a promising model, named Matching the Statements (MTS) that incorporates the discussed topic information into arguments/key points comprehension to fully understand their meanings, thus accurately performing ranking and retrieving best-match key points for an input argument. Our approach 1 has achieved the 4 th place in Track 1 of the Quantitative Summarization-Key Point Analysis Shared Task by IBM, yielding a competitive performance of 0.8956 (3 rd) and 0.9632 (7 th) strict and relaxed mean Average Precision, respectively.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Key Point Analysis (KPA) is one of the most essential tasks in building an Opinion Summarization system, which is capable of generating key points for a collection of arguments toward a particular topic. Furthermore, KPA allows quantifying the coverage of each summary by counting its matched arguments. With the aim of creating high-quality summaries, it is necessary to have an in-depth understanding of each individual argument as well as its universal semantic in a specified context. In this paper, we introduce a promising model, named Matching the Statements (MTS) that incorporates the discussed topic information into arguments/key points comprehension to fully understand their meanings, thus accurately performing ranking and retrieving best-match key points for an input argument. 
Our approach 1 has achieved the 4 th place in Track 1 of the Quantitative Summarization-Key Point Analysis Shared Task by IBM, yielding a competitive performance of 0.8956 (3 rd) and 0.9632 (7 th) strict and relaxed mean Average Precision, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Prior work in Opinion Summarization often followed the extractive strategy, which identifies the most representative pieces of information from the source text and copies them verbatim to serve as summaries (Angelidis and Lapata, 2018; Brazinskas et al., 2019) . Abstractive summarization is a less popular strategy compared to the previous one yet offers more coherent output texts. Approaches governed by this vein could generate new phrases, sentences or even paragraphs that may not appear in the input documents (Ganesan et al., 2010; Isonuma et al., 2021) . Both extractive and abstractive methods are the straightforward applications of multi-document summarization (Liu et al., 2018; Fabbri et al., 2019) , which has been an emerging research domain of natural language processing in recent years.", "cite_spans": [ { "start": 207, "end": 235, "text": "(Angelidis and Lapata, 2018;", "ref_id": "BIBREF0" }, { "start": 236, "end": 260, "text": "Brazinskas et al., 2019)", "ref_id": "BIBREF3" }, { "start": 517, "end": 539, "text": "(Ganesan et al., 2010;", "ref_id": "BIBREF11" }, { "start": 540, "end": 561, "text": "Isonuma et al., 2021)", "ref_id": "BIBREF18" }, { "start": 673, "end": 691, "text": "(Liu et al., 2018;", "ref_id": "BIBREF22" }, { "start": 692, "end": 712, "text": "Fabbri et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As is well known, in traditional multi-document summarization methods, the role of an individual or a subset of key points among the summaries is often neglected. To be more specific, Bar-Haim et al. (2020) posed a question regarding the summarized ability of a small group of key points, and to some extent answered that question on their own by developing baseline models that could produce a concise bullet-like summary for the crowd-contributed arguments. With a pre-defined list of summaries, this task is known as Key Point Matching (KPM). Figure 1 provides a simple illustration of the KPM problem, where the most relevant key points are retrieved for each given argument within a certain topic (i.e. context).", "cite_spans": [ { "start": 184, "end": 206, "text": "Bar-Haim et al. (2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 546, "end": 554, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by the previous work that studied the problem of learning sentence representation (Cer et al., 2018; Reimers and Gurevych, 2019) and semantic similarity (Yan et al., 2021) , we propose Matching The Statements (MTS), which further takes the topic information into account and effectively utilizes such proper features to learn a high performance unified model. 
Our approach has benefited from the recent developments of pre-trained language models such as BERT (Devlin et al., 2018) , ALBERT (Lan et al., 2019) or RoBERTa (Liu et al., 2019) .", "cite_spans": [ { "start": 91, "end": 109, "text": "(Cer et al., 2018;", "ref_id": "BIBREF4" }, { "start": 110, "end": 137, "text": "Reimers and Gurevych, 2019)", "ref_id": "BIBREF25" }, { "start": 162, "end": 180, "text": "(Yan et al., 2021)", "ref_id": "BIBREF32" }, { "start": 469, "end": 490, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 500, "end": 518, "text": "(Lan et al., 2019)", "ref_id": "BIBREF20" }, { "start": 530, "end": 548, "text": "(Liu et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions in this paper could be depicted as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Firstly, we design a simple yet efficient network architecture to fuse the context into sentence-level representations. Instead of letting the model infer the implicit reasoning structure, we provide our model with the information on whether an argument or key point (which are collectively referred to as statements in the remainder of this paper) supports its main topic or not. We should let everyone retire when they are ready Figure 1 : Overview of the Key Point Matching workflow in the Quantitative Summarization -Key Point Analysis Shared Task Track 1. From the information retrieval perspective, this task is to identify the most salient point that reinforces a given query.", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 441, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Secondly, our method adopts the pseudolabels mechanism (Ge et al., 2020; , where we label arguments that belong to the same key point (and the key point itself) by the same index. The goal is to learn an embedding space in which the embedded vectors of mutual supportive statement pairs (i.e. having the same label) are pulled closer whereas unrelated ones are pushed apart.", "cite_spans": [ { "start": 57, "end": 74, "text": "(Ge et al., 2020;", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Finally, we validate the proposed MTS on the ArgKP-2021 (Bar-Haim et al., 2020) dataset in a variety of protocols. Extensive experiment results show that our proposed method strongly outperforms other baselines without using external data, thus becoming a potential method for the Key Point Matching problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized in the following way: Section 2 briefly reviews the related work, while section 3 formulates the KPM problem. Next, we describe our methodology in section 4, followed by the experimental results in section 5. Finally, section 6 will conclude our work and discuss future directions for further improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A standard approach for key points and arguments analysis is properly extracting their meaningful semantics. 
Our model stems from recent literature based on siamese neural networks (Reimers and Gurevych, 2019; Gao et al., 2021) to measure the semantic similarity between documents. Nevertheless, MTS has its own unique characteristics for incorporating context information.", "cite_spans": [ { "start": 191, "end": 219, "text": "(Reimers and Gurevych, 2019;", "ref_id": "BIBREF25" }, { "start": 220, "end": 237, "text": "Gao et al., 2021)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The representation of sentences in a fixed-dimensional vector space plays a crucial role in enhancing a model's performance on downstream tasks. Early methods relied on static word embeddings (Pennington et al., 2014; Bojanowski et al., 2017) , which encoded a sentence by directly averaging its word vectors or employing recurrent neural network (RNN) encoders (Conneau et al., 2017) and taking the pooled output from the hidden units. Although these methods can leverage both syntactic and semantic features, they often fail to retain contextual information or suffer from slow training (due to the sequential nature of RNNs).", "cite_spans": [ { "start": 191, "end": 216, "text": "(Pennington et al., 2014;", "ref_id": "BIBREF24" }, { "start": 217, "end": 241, "text": "Bojanowski et al., 2017)", "ref_id": "BIBREF2" }, { "start": 361, "end": 383, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Embeddings", "sec_num": "2.1" }, { "text": "That is where BERT (Devlin et al., 2018) and its variants come in and dominate modern NLP research. Training these architectures can exploit the parallel computational capacity of GPU/TPU hardware accelerators. In SBERT, Reimers and Gurevych (2019) proposed a sentence embedding method via fine-tuning BERT models on natural language inference (NLI) datasets. More recent studies in learning sentence representations followed the contrastive learning paradigm and achieved state-of-the-art performance on numerous benchmark tasks (Liao, 2021; Yan et al., 2021) . ", "cite_spans": [ { "start": 19, "end": 40, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 234, "end": 261, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF25" }, { "start": 545, "end": 557, "text": "(Liao, 2021;", "ref_id": "BIBREF21" }, { "start": 558, "end": 575, "text": "Yan et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Embeddings", "sec_num": "2.1" }, { "text": "Semantic matching is a long-standing problem and has a wide range of applications, such as question answering (Yang et al., 2015 ), text summarization (Zhong et al., 2020) and, especially, information retrieval (Huang et al., 2013; Guo et al., 2016) . Jiang et al. (2019) introduced a hierarchical recurrent neural network that could capture long-term dependencies and synthesize information from different granularities (i.e. words, sentences or paragraphs). Similarly, Yang et al. 
(2020) replaced the RNN backbones with transformer-based models and modified self-attention architectures to adapt to long document inputs.", "cite_spans": [ { "start": 110, "end": 128, "text": "(Yang et al., 2015", "ref_id": "BIBREF34" }, { "start": 151, "end": 171, "text": "(Zhong et al., 2020)", "ref_id": "BIBREF37" }, { "start": 210, "end": 230, "text": "(Huang et al., 2013;", "ref_id": "BIBREF16" }, { "start": 231, "end": 248, "text": "Guo et al., 2016)", "ref_id": "BIBREF14" }, { "start": 251, "end": 270, "text": "Jiang et al. (2019)", "ref_id": "BIBREF19" }, { "start": 468, "end": 486, "text": "Yang et al. (2020)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic matching", "sec_num": "2.2" }, { "text": "However, most of the existing work focuses only on assessing the similarity between pairs of sentences without paying attention to their context, which can help the reader to get an overview of the discussed topic. Recently, the ArgKP-2021 dataset was published by Bar-Haim et al. (2020) , which consists of annotations about whether two statements and their stances towards a specific topic match or not. The next sections provide an overview of this dataset and how our model is applied in the Quantitative Summarization -Key Point Analysis Shared Task 2 .", "cite_spans": [ { "start": 268, "end": 290, "text": "Bar-Haim et al. (2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic matching", "sec_num": "2.2" }, { "text": "In this shared task, we are provided with a dataset consisting of 28 different topics. Each topic contains a set of associated arguments and key points in the form of matching (with label 1) or non-matching (with label 0) pairs. The stances of these statements (whether a claim agrees or disagrees with its topic) are also exposed; we further evaluate the impact of this information in section 5.6.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3" }, { "text": "In short, the Key Point Matching problem is formulated as follows: Given a controversial topic T with a list of m arguments and n key points", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3" }, { "text": "A_1, A_2, ..., A_m; K_1, K_2, ..., K_n, along with their corresponding stances S_1, S_2, ..., S_{m+n} (S_i \u2208 {\u22121, 1})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3" }, { "text": ", which imply the attack or support relationship against the topic, our task is to rank the key points that share a stance with an input argument by their matching scores. This ranking depends on both the topic and the semantics of the statements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem definition", "sec_num": "3" }, { "text": "The proposed MTS architecture is graphically shown in Figure 2. It takes four separate inputs: (i) the discussed topic, (ii) the first statement, (iii) the second statement, and (iv) their stance toward the topic. The final output is the similarity score of the fed-in statements with respect to the main context. 
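To make the task output concrete, the following minimal sketch (illustrative only, not part of the released implementation; all names are hypothetical) shows how, given pre-computed embeddings, the key points sharing an argument's stance would be ranked by matching score:

```python
# Illustrative sketch of the KPM output: rank same-stance key points for one argument
# by cosine similarity of pre-computed embeddings. All names here are hypothetical.
import torch
import torch.nn.functional as F

def rank_key_points(arg_emb, kp_embs, arg_stance, kp_stances):
    # arg_emb: (D,), kp_embs: (n, D), stances are +1 / -1 as in the problem definition
    scores = F.cosine_similarity(arg_emb.unsqueeze(0), kp_embs, dim=-1)   # (n,)
    same_stance = torch.tensor(kp_stances) == arg_stance
    scores = scores.masked_fill(~same_stance, float("-inf"))             # never match across stances
    return torch.argsort(scores, descending=True)                        # best-matching key points first
```
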
In the remainder of this section, we describe the three main components of MTS: the encoding, context integration and statement encoding layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "We observe that a small percentage of the arguments (4.71%) belong to two or more key points, while the rest are matched with at most one. For that reason, a straightforward idea is to gather the arguments that belong to the same key point and label the clusters in order. In other words, each cluster is represented by a key point K_i and contains K_i together with its matching arguments. As a result of this clustering, a small number of arguments belong to multiple clusters. Arguments that do not match any of the key points are grouped into the NON-MATCH set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "4.1" }, { "text": "Intuitively, if two different arguments support the same key point, they tend to convey similar meanings and should be considered a matching pair of statements. Conversely, statements from different clusters are considered non-matching in our approach. This pseudo-label method thus exploits the similar semantics of within-cluster documents and enhances the model's robustness. In the remainder of this paper, argument pairs that come from the same cluster are referred to as positive pairs; otherwise, they are negative pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "4.1" }, { "text": "During training, we use each key point and its matching/non-matching arguments (based on the annotation in the ArgKP-2021 dataset) in a mini-batch. Moreover, we also sample a small proportion of the NON-MATCH arguments and merge them into the mini-batch. Specifically, all the NON-MATCH arguments are considered to come from different and novel clusters. Because positive/negative statement pairs are well-defined, we can easily compute the loss in each mini-batch with a usual metric learning loss (Chopra et al., 2005; Yu and Tao, 2019) .", "cite_spans": [ { "start": 515, "end": 536, "text": "(Chopra et al., 2005;", "ref_id": "BIBREF5" }, { "start": 537, "end": 554, "text": "Yu and Tao, 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "4.1" }, { "text": "We first extract the contextualized representation for textual inputs using the RoBERTa (Liu et al., 2019) model. We adopt a canonical method (Sun et al., 2019) to obtain the final embedding of a given input, which is concatenating the last four hidden states of the [CLS] token. These embeddings are fed into the context integration layer as an aggregate representation for topics, arguments and key points. For example, a statement vector at this point is denoted as 3 :", "cite_spans": [ { "start": 88, "end": 106, "text": "(Liu et al., 2019)", "ref_id": "BIBREF23" }, { "start": 142, "end": 160, "text": "(Sun et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Encoding layer", "sec_num": "4.2" }, { "text": "h_X = [h_X^1, h_X^2, ..., h_X^{4\u00d7768}] (h_X^i \u2208 R) = [h_X^1, h_X^2, ..., h_X^{3072}]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding layer", "sec_num": "4.2" }, { "text": "where 768 is the hidden size of the RoBERTa-base model. 
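As an illustration of this encoding step, a minimal sketch with the HuggingFace transformers library could look as follows (the function and variable names are ours, not those of the released code):

```python
# Sketch of the encoding layer: concatenate the last four hidden states of the
# [CLS] (<s>) token from roberta-base into a 4 x 768 = 3072-dimensional vector.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
backbone = AutoModel.from_pretrained("roberta-base")

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = backbone(**batch, output_hidden_states=True)
    last_four = out.hidden_states[-4:]                         # four tensors of shape (batch, seq_len, 768)
    return torch.cat([h[:, 0, :] for h in last_four], dim=-1)  # (batch, 3072)
```
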
For the stance encoding, we employ a fully-connected network with no activation function to map the scalar input to an N-dimensional vector space. The representations of each topic, statement and stance are denoted as h_T, h_X and h_S, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding layer", "sec_num": "4.2" }, { "text": "After using the RoBERTa backbone and a shallow neural network to extract the embeddings of the multiple inputs, we conduct a simple concatenation with the aim of incorporating the topic (i.e. context) and stance information into the argument/key point representations. After this step, the obtained vector for each statement is (the [;] notation indicates the concatenation operator):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context integration layer", "sec_num": "4.3" }, { "text": "v_X = [h_S; h_T; h_X], where v_X \u2208 R^{N + 2\u00d73072}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context integration layer", "sec_num": "4.3" }, { "text": "The statement encoding component has another fully-connected network on top of the context integration layer to get the final D-dimensional embeddings for key points or arguments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement encoding layer", "sec_num": "4.4" }, { "text": "e_X = v_X W + b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement encoding layer", "sec_num": "4.4" }, { "text": "where W \u2208 R^{(N+6144)\u00d7D} and b \u2208 R^D are the weight and bias parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement encoding layer", "sec_num": "4.4" }, { "text": "Concretely, training our model is equivalent to learning a function f(S, T, X) that maps similar statements onto close points and dissimilar ones onto distant points in R^D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statement encoding layer", "sec_num": "4.4" }, { "text": "In each iteration, we consider each input statement from the incoming mini-batch as an anchor document and sample positive/negative documents from within/across clusters. To calculate the matching score between two statements, we compute the cosine distance of their embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "D_{cosine}(e_{X_1}, e_{X_2}) = 1 \u2212 cos(e_{X_1}, e_{X_2}) = 1 \u2212 (e_{X_1}^T e_{X_2}) / (||e_{X_1}||_2 ||e_{X_2}||_2) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "Empirical results show that cosine distance yields the best performance compared to the Manhattan distance (||e_{X_1} \u2212 e_{X_2}||_1) and the Euclidean distance (||e_{X_1} \u2212 e_{X_2}||_2). Hence, we use cosine as the default distance metric throughout our experiments. We also revisit several loss functions, such as the contrastive loss (Chopra et al., 2005) , the triplet loss (Dong and Shen, 2018) and the tuplet margin loss (Yu and Tao, 2019) . 
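For clarity, the context integration and statement encoding layers together with the cosine distance of Eq. (1) can be sketched as follows; N, D and all module names here are illustrative choices, not values taken from the released implementation:

```python
# Sketch of Sections 4.3-4.5: fuse stance, topic and statement representations,
# project to a D-dimensional embedding, and score pairs by cosine distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatementEncoder(nn.Module):
    def __init__(self, n_stance=32, d_out=256, h_dim=4 * 768):
        super().__init__()
        self.stance_proj = nn.Linear(1, n_stance)               # scalar stance -> N-dim vector (no activation)
        self.out_proj = nn.Linear(n_stance + 2 * h_dim, d_out)  # (N + 6144) -> D

    def forward(self, h_topic, h_statement, stance):
        h_stance = self.stance_proj(stance.float().unsqueeze(-1))  # (batch, N)
        v = torch.cat([h_stance, h_topic, h_statement], dim=-1)    # (batch, N + 2 * 3072)
        return self.out_proj(v)                                    # (batch, D)

def cosine_distance(e1, e2):
    return 1.0 - F.cosine_similarity(e1, e2, dim=-1)               # Eq. (1)
```
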
Unlike previous work, Yu and Tao (2019) use another distance metric, which is described below.", "cite_spans": [ { "start": 320, "end": 341, "text": "(Chopra et al., 2005)", "ref_id": "BIBREF5" }, { "start": 357, "end": 378, "text": "(Dong and Shen, 2018)", "ref_id": "BIBREF8" }, { "start": 402, "end": 420, "text": "(Yu and Tao, 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "Assume that a mini-batch consists of k + 1 samples {X_a, X_p, X_n_1, X_n_2, ..., X_n_{k\u22121}} satisfying the tuplet constraint: X_p is a positive statement w.r.t. X_a, whereas the X_n_i are negative statements w.r.t. X_a. Mathematically, the tuplet margin loss function is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "L_{tuplet} = log(1 + \u03a3_{i=1}^{k\u22121} e^{s(cos \u03b8_{an_i} \u2212 cos(\u03b8_{ap} \u2212 \u03b2))})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "where \u03b8_{ap} is the angle between e_{X_a} and e_{X_p}; \u03b8_{an_i} is the angle between e_{X_a} and e_{X_n_i}. \u03b2 is the margin hyper-parameter, which imposes the distance between negative pairs to be larger than \u03b2. Finally, s acts as a scaling factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "Additionally, Yu and Tao (2019) also introduced the intra-pair variance loss, which was theoretically proven to mitigate intra-pair variation and improve generalizability. In MTS, we use a weighted combination of both the tuplet margin and the intra-pair variance losses as our loss function. The formulation of the latter is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "L_{pos} = E[(1 \u2212 \u03b5) E[cos \u03b8_{ap}] \u2212 cos \u03b8_{ap}]_+^2 ; L_{neg} = E[cos \u03b8_{an} \u2212 (1 + \u03b5) E[cos \u03b8_{an}]]_+^2 ; L_{intra-pair} = L_{pos} + L_{neg}, where [\u2022]_+ = max(0, \u2022).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "As pointed out by Hermans et al. (2017) and Wu et al. (2017), training these siamese neural networks raises a couple of issues regarding the bias toward easy/uninformative examples. In fact, if we keep feeding random pairs, more and more easy ones are included, and they prevent the model from learning. Hence, a hard mining strategy becomes crucial for avoiding learning from such redundant pairs. In MTS, we adapt the multi-similarity mining from Wang et al. (2019) , which identifies a sample's hard pairs using its neighbors.", "cite_spans": [ { "start": 18, "end": 39, "text": "Hermans et al. (2017)", "ref_id": "BIBREF15" }, { "start": 416, "end": 434, "text": "Wang et al. (2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "Given a pre-defined threshold \u03b5, we select a negative pair if its cosine similarity is greater than that of the hardest positive pair minus \u03b5. For instance, let X_a be a statement whose positive and negative sets of statements are denoted by P_{X_a} and N_{X_a}, respectively. 
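A sketch of this combined objective is given below, assuming the pytorch-metric-learning library, which provides a tuplet margin loss, an intra-pair variance loss and a multi-similarity miner; the hyper-parameter values and the combination weight are illustrative rather than the paper's. The miner's pair-selection rule is formalized right after this sketch.

```python
# Sketch of the training objective on the pseudo-labeled clusters of Section 4.1,
# using pytorch-metric-learning; the values shown are illustrative defaults.
from pytorch_metric_learning import losses, miners

miner = miners.MultiSimilarityMiner(epsilon=0.1)               # threshold discussed above
tuplet_loss = losses.TupletMarginLoss(margin=5.73, scale=64)
variance_loss = losses.IntraPairVarianceLoss(pos_eps=0.01, neg_eps=0.01)

def training_loss(embeddings, cluster_labels, weight=0.5):
    # embeddings: (batch, D) statement embeddings; cluster_labels: pseudo-labels per cluster
    hard_pairs = miner(embeddings, cluster_labels)
    return (tuplet_loss(embeddings, cluster_labels, hard_pairs)
            + weight * variance_loss(embeddings, cluster_labels, hard_pairs))
```
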
A negative pair of statements {X_a, X_n} is chosen if:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "cosine(e_{X_a}, e_{X_n}) \u2265 min_{X_i \u2208 P_{X_a}} cosine(e_{X_a}, e_{X_i}) \u2212 \u03b5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "Such pairs are referred to as hard negative pairs; we carry out a similar process to form hard positive pairs. If a positive pair {X_a, X_p} is selected, then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "cosine(e_{X_a}, e_{X_p}) \u2264 max_{X_i \u2208 N_{X_a}} cosine(e_{X_a}, e_{X_i}) + \u03b5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.5" }, { "text": "At inference time, we pair up the arguments and key points that debate a topic under the same stance. Afterward, we compute the matching score based on the angle between their embeddings. For instance, an argument A and a key point K will have a matching score of:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.6" }, { "text": "score(e_A, e_K) = 1 \u2212 D_{cosine}(e_A, e_K) = cos(e_A, e_K)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.6" }, { "text": "The right-hand side function squashes the score into the probability interval [0, 1) and is compatible with the loss function presented in section 4.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "4.6" }, { "text": "To verify the effectiveness of the Matching The Statements model, we conduct extensive experiments on the ArgKP-2021 (Bar-Haim et al., 2020) dataset and compare the performance of MTS against baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "ArgKP-2021 (Bar-Haim et al., 2020) , the dataset used in the Quantitative Summarization -Key Point Analysis Shared Task, is split into training and development sets with a ratio of 24 : 4. The training set is composed of 5583 arguments and 207 key points, while those figures in the development set are 932 and 36. Each argument could be matched to one or more key points, yet the latter case accounts for a small proportion of the data, as stated in section 4.1. The texts presented in ArgKP-2021 are relatively short, with an average length of approximately 18.22 \u00b1 7.76 words or 108.20 \u00b1 43.51 characters.", "cite_spans": [ { "start": 11, "end": 34, "text": "(Bar-Haim et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "ArgKP-2021 Dataset", "sec_num": "5.1" }, { "text": "For evaluation, only the most likely key point is chosen for each argument based on the predicted scores. These pairs are then sorted by their matching scores in descending order, and only the first half of them are included in the assessment. According to Friedman et al. ", "cite_spans": [ { "start": 257, "end": 279, "text": "Friedman et al. 
(2021)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation protocol", "sec_num": "5.2" }, { "text": "Figure 3 depicts the qualitative representation learning result of MTS before and after training. In the beginning, the similarity scores between matched/non-match key point-argument pairs are extremely high (\u2248 0.999). That means, almost all the statements are projected into a small region of the embedding space, and it is difficult to derive a cut-off threshold to get rid of the non-matching pairs. Therefore, the mean Average Precision scores when we directly use the untrained model with RoBERTa backbone are relatively low. Though, our training procedure improves the model's distinguishability and reduces the collapsed representation phenomenon. Indeed, the similarity scores at this point are stretched out and the mAP scores significantly increase (strict mAP 0.45 \u2192 0.84; relaxed mAP 0.62 \u2192 0.94).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings quality", "sec_num": "5.3" }, { "text": "For performance benchmarking, we implement two different baselines and their variations, namely Simple Argument-Key point matching (SimAKP) The \"T-\" prefix denotes the models that use triplet loss (Dong and Shen, 2018) while the rest are trained with the contrastive loss (Chopra et al., 2005) .", "cite_spans": [ { "start": 197, "end": 218, "text": "(Dong and Shen, 2018)", "ref_id": "BIBREF8" }, { "start": 272, "end": 293, "text": "(Chopra et al., 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.4" }, { "text": "and Question Answering-liked Argument-Key point matching (QA) models. We construct a sampling strategy in an online manner: in each minibatch, we select the hardest positive/negative pairs according to the method discussed in Section 4.5 to compute the loss. Simple Argument-Key point matching: The architecture of SimAKP is the same as MTS with the main difference in the data preparation. Instead of clustering similar statements, SimAKP simply performs pair-wise classification on the ArgKP-2021 dataset. Equivalently, each input to the SimAKP model consists of an argument-key point pair. This approach will not make use of the analogous nature of these claims that matched with the same key point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.4" }, { "text": "Question Answering-liked Argument-Key point matching: Inspired by the Question Answering, we format the arguments and key points fed to the RoBERTa model in order to incorporate the context into statements as below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.4" }, { "text": "[ In particular, obtained outputs of RoBERTa model with the above inputs are then concatenated with the stance representations to produce a tensor with shape (batch size, N + 3072), which is fed to a fully connected layer to embed the semantic meaning of each individual statement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.4" }, { "text": "To facilitate evaluation, we set up a 7-fold crossvalidation, each contains 24 topics for training and 4 topics for development. 
The train-dev split in Track 1 of the Quantitative Summarization -Key Point Analysis Shared Task is replicated in fold 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "As can be seen in Figure 4 , our proposed MTS (we use triplet loss for a fair comparison) consistently outperforms the other baselines in both mAP scores (higher is better). It achieved competitive scores on all splits except fold 7. The reason is that the number of labeled argument-key point pairs in the development set of this fold is the smallest among the 7 folds, and there are substantial performance drops for all baselines. We also examine the impact of hard negative mining in Table 1 , where the baselines are compared against themselves when using the hard mining strategy (i.e. avoiding learning the embeddings of trivial samples). With the employment of hard mining, there is an improvement in performance for most baselines. Except for a small decrease in relaxed mAP for SimAKP, both the contrastive and triplet loss Simple Argument-Key point matching models have an average increase of 0.005% in mAP scores.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 509, "end": 516, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5.5" }, { "text": "To provide an insight analysis into the setting of Matching The Statements, we create four different setups: the original MTS, MTS with batch normalization (Ioffe and Szegedy, 2015) immediately after the context integration layer, MTS without the mining strategy, and triplet-loss MTS. Although the tuplet margin loss has its own mechanism for up/down-weighting hard/easy samples, we find that MTS still benefits from the multi-similarity mining (Wang et al., 2019) . Figure 5 : Switching off different setups shows that each component of the original MTS's setting contributes to its performance. Figure 5 summarizes the average score for all setups. Overall, MTS performs similarly to or better than its variants (without multi-similarity mining or with an added batch normalization layer). Replacing triplet loss with tuplet margin loss helps to boost both strict mAP and relaxed mAP by 0.2. Eventually, in an attempt to produce consistent and accurate predictions on the test dataset, an ensemble of the 4/7 best models from the splits was used for the final submission. As shown in Table 2 , among the performances of the top-10 teams, our proposed model achieved the third position in terms of strict mAP, 7th position in relaxed mAP and 4th overall. ", "cite_spans": [ { "start": 157, "end": 182, "text": "(Ioffe and Szegedy, 2015)", "ref_id": "BIBREF17" }, { "start": 406, "end": 425, "text": "(Wang et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 426, "end": 434, "text": "Figure 5", "ref_id": null }, { "start": 556, "end": 564, "text": "Figure 5", "ref_id": null }, { "start": 1026, "end": 1033, "text": "Table 2", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Differential Analysis", "sec_num": "5.6" }, { "text": "Here, we showcase the benefit of taking the concatenation of the last four hidden-state layers of the [CLS] token as the aggregate representation for the whole document. The first part of Table 3 is clear proof of this advantage: using only the last hidden layer of [CLS] can hurt the overall performance. Likewise, mean-pooling or summing up the token embeddings gives worse results compared to our method. 
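For reference, the mean-pooling alternative compared in Table 3 can be sketched as follows (mask-aware averaging of the last hidden state; the function and argument names are illustrative):

```python
# Sketch of the mean-pooling baseline from Table 3: average the token embeddings of
# the last hidden state, ignoring padding. In our comparison this underperforms the
# concatenated [CLS] representation used by MTS.
import torch

def mean_pool(last_hidden_state, attention_mask):
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # (batch, 1)
    return summed / counts
```
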
To show the generality and applicability of our proposed model, we retain the MTS configuration when experimenting with other transformer-based backbones, such as BERT (Devlin et al., 2018) , ALBERT (Lan et al., 2019) , DistilBERT (Sanh et al., 2019) , LUKE (Yamada et al., 2020) or MP-Net (Song et al., 2020) . According to the second part of Table 3 , among the six pre-trained language models, MPNet yields results comparable to RoBERTa (\u2248 0.84 & 0.94) while requiring roughly 10% fewer parameters. We also note that the larger model size of Language Understanding with Knowledge-based Embeddings (LUKE) compared with RoBERTa results in an unexpected performance reduction.", "cite_spans": [ { "start": 584, "end": 605, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 615, "end": 633, "text": "(Lan et al., 2019)", "ref_id": "BIBREF20" }, { "start": 647, "end": 666, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF26" }, { "start": 674, "end": 695, "text": "(Yamada et al., 2020)", "ref_id": "BIBREF31" }, { "start": 706, "end": 725, "text": "(Song et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 3", "ref_id": null }, { "start": 760, "end": 767, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "BERT embeddings", "sec_num": "5.6.1" }, { "text": "Up to this point, we have completed the experiments needed to examine the effectiveness of our methodology. In this subsection, we further investigate the importance of the stance factor in building the MTS model by posing a question: \"How good is MTS when it has to predict the implicit relation between claims and topic?\". Since the topic information is already incorporated when encoding the statements, it is perhaps sufficient to learn meaningful representations without explicitly providing the stance information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance effect", "sec_num": "5.6.2" }, { "text": "By discarding the stance-related components of MTS (figure 2), the averaged result over the 7 folds degrades, as expected, to 0.741 \u00b1 0.094 in strict mAP but rises to 0.952 \u00b1 0.019 in relaxed mAP. This is because each argument can now be matched with key points that have different stances. Based on this exploration, an open challenge for future research is finding a better way to comprehend statements within a topic (i.e. letting the model infer the stance itself). For instance, one could consider employing an attention mechanism between a topic and its arguments and key points to characterize the relationship between them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance effect", "sec_num": "5.6.2" }, { "text": "In this paper, we present an efficient key point matching method based on supervised contrastive learning. We suppose that clustering the statements is beneficial for model training, and we empirically verify this conclusion in the experiments. In addition, we found a simple and effective technique to encode these statements, which yields superior performance. In terms of model architecture, the components are carefully designed to ensure efficiency. 
Results on Track 1 of Quantitative Summarization -Key Point Analysis show our method is a conceptually simple approach yet achieves promising performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The code is available at: https://github.com/ VietHoang1512/KPA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://2021.argmining.org/shared_ task_ibm.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For a consistent notation, statements and stances are denoted by uppercase letters: X and S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " Table 3 : Comparison between different embedding strategies and pre-trained language models. In this experiment, we report the result of the base version.", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised", "authors": [ { "first": "Stefanos", "middle": [], "last": "Angelidis", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3675--3686", "other_ids": { "DOI": [ "10.18653/v1/D18-1403" ] }, "num": null, "urls": [], "raw_text": "Stefanos Angelidis and Mirella Lapata. 2018. Sum- marizing opinions: Aspect extraction meets senti- ment prediction and they are both weakly super- vised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675-3686, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "From arguments to key points: Towards automatic argument summarization", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Lilach", "middle": [], "last": "Eden", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Kantor", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Lahav", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4029--4039", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.371" ] }, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, and Noam Slonim. 2020. From arguments to key points: Towards automatic argu- ment summarization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4029-4039, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised multi-document opinion summarization as copycat-review generation", "authors": [ { "first": "Arthur", "middle": [], "last": "Brazinskas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02247" ] }, "num": null, "urls": [], "raw_text": "Arthur Brazinskas, Mirella Lapata, and Ivan Titov. 2019. Unsupervised multi-document opinion sum- marization as copycat-review generation. arXiv preprint arXiv:1911.02247.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Universal sentence encoder", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-C\u00e9spedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11175" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning a similarity metric discriminatively, with application to face verification", "authors": [ { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Raia", "middle": [], "last": "Hadsell", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2005, "venue": "2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)", "volume": "1", "issue": "", "pages": "539--546", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Com- puter Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. 
IEEE.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Triplet loss in siamese network for object tracking", "authors": [ { "first": "Xingping", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jianbing", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European conference on computer vision (ECCV)", "volume": "", "issue": "", "pages": "459--474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xingping Dong and Jianbing Shen. 2018. Triplet loss in siamese network for object tracking. In Proceed- ings of the European conference on computer vision (ECCV), pages 459-474.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model", "authors": [ { "first": "Alexander", "middle": [], "last": "Fabbri", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tianwei", "middle": [], "last": "She", "suffix": "" }, { "first": "Suyi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1074--1084", "other_ids": { "DOI": [ "10.18653/v1/P19-1102" ] }, "num": null, "urls": [], "raw_text": "Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstrac- tive hierarchical model. 
In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Overview of kpa-2021 shared task: Key point based quantitative summarization", "authors": [ { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Dankin", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Yufang", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 8th Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roni Friedman, Lena Dankin, Yoav Katz, Yufang Hou, and Noam Slonim. 2021. Overview of kpa-2021 shared task: Key point based quantitative summa- rization. In Proceedings of the 8th Workshop on Ar- gumentation Mining. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Opinosis: A graph based approach to abstractive summarization of highly redundant opinions", "authors": [ { "first": "Kavita", "middle": [], "last": "Ganesan", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "340--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstrac- tive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340-348, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Simcse: Simple contrastive learning of sentence embeddings", "authors": [ { "first": "Tianyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xingcheng", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.08821" ] }, "num": null, "urls": [], "raw_text": "Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence em- beddings. arXiv preprint arXiv:2104.08821.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person reidentification", "authors": [ { "first": "Yixiao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Dapeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.01526" ] }, "num": null, "urls": [], "raw_text": "Yixiao Ge, Dapeng Chen, and Hongsheng Li. 2020. Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re- identification. 
Figure: The overall design of our Matching The Statements architecture.
Figure: Statement representation before (left) and after (right) training.
Figure: Mean Average Precision scores over 7 folds.
Table: The effect of hard sample mining in baselines.

There are two metrics used in Track 1, namely relaxed and strict mean Average Precision (mAP), where Precision = True Positive / (True Positive + False Positive). Since some argument-key point pairs in the ArgKP-2021 dataset have indecisive annotations (i.e., their label is neither matched nor non-matched), the relaxed mAP evaluation treats such pairs as matched, whereas strict mAP treats them as non-matched.
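The strict versus relaxed handling of indecisive pairs amounts to a label-mapping step applied before ranking-based evaluation. The Python sketch below is a minimal illustration of that step, not the shared task's official scorer (which additionally averages Average Precision over topics and stances); the helper names `average_precision` and `map_score` are hypothetical and introduced only for this example.

```python
from typing import List, Optional

def average_precision(labels: List[int], scores: List[float]) -> float:
    """Average Precision over pairs ranked by descending match score."""
    ranked = [label for _, label in sorted(zip(scores, labels), key=lambda x: -x[0])]
    hits, precisions = 0, []
    for rank, label in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at this cut-off
    return sum(precisions) / max(hits, 1)

def map_score(labels: List[Optional[int]], scores: List[float], relaxed: bool) -> float:
    """Resolve undecided pairs (label None) as matched (relaxed) or non-matched (strict),
    then compute Average Precision over the resolved labels."""
    resolved = [(1 if relaxed else 0) if label is None else label for label in labels]
    return average_precision(resolved, scores)

# Toy example: a single undecided pair ranked second changes the two scores differently.
labels = [1, None, 0, 1]       # gold: matched / undecided / non-matched / matched
scores = [0.9, 0.8, 0.4, 0.7]  # model's argument-key point match scores
print("strict AP :", map_score(labels, scores, relaxed=False))
print("relaxed AP:", map_score(labels, scores, relaxed=True))
```

In this toy example the undecided pair is ranked second, so relaxed AP is 1.0 while strict AP drops to roughly 0.83, illustrating why the two reported mAP figures can differ for the same ranking.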