{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:57.424320Z" }, "title": "Key Point Analysis via Contrastive Learning and Extractive Argument Summarization", "authors": [ { "first": "Milad", "middle": [], "last": "Alshomary", "suffix": "", "affiliation": { "laboratory": "", "institution": "Paderborn University", "location": { "settlement": "Germany" } }, "email": "milad.alshomary@upb.de" }, { "first": "Timon", "middle": [], "last": "Gurke", "suffix": "", "affiliation": { "laboratory": "", "institution": "Paderborn University", "location": { "settlement": "Germany" } }, "email": "" }, { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Philipp", "middle": [], "last": "Heinisch", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Maximilian", "middle": [], "last": "Splieth\u00f6ver", "suffix": "", "affiliation": { "laboratory": "", "institution": "Paderborn University", "location": { "settlement": "Germany" } }, "email": "" }, { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "", "affiliation": { "laboratory": "", "institution": "Paderborn University", "location": { "settlement": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Key point analysis is the task of extracting a set of concise and high-level statements from a given collection of arguments, representing the gist of these arguments. This paper presents our proposed approach to the Key Point Analysis shared task, collocated with the 8th Workshop on Argument Mining. The approach integrates two complementary components. One component employs contrastive learning via a siamese neural network for matching arguments to key points; the other is a graph-based extractive summarization model for generating key points. In both automatic and manual evaluation, our approach was ranked best among all submissions to the shared task.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Key point analysis is the task of extracting a set of concise and high-level statements from a given collection of arguments, representing the gist of these arguments. This paper presents our proposed approach to the Key Point Analysis shared task, collocated with the 8th Workshop on Argument Mining. The approach integrates two complementary components. One component employs contrastive learning via a siamese neural network for matching arguments to key points; the other is a graph-based extractive summarization model for generating key points. In both automatic and manual evaluation, our approach was ranked best among all submissions to the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Informed decision-making on a controversial issue usually requires considering several pro and con arguments. To answer the question \"Is organic food healthier?\", for example, people may query a search engine that retrieves arguments from diverse sources such as news editorials, debate portals, and social media discussions, which can then be compared and weighed. However, given the constant stream of digital information, this process may be time-intensive and overwhelming. 
Search engines and similar support systems may therefore benefit from employing argument summarization, that is, the generated summaries may aid the decisionmaking by helping users quickly choose relevant arguments with a specific stance towards the topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Argument summarization has been tackled both for single documents and multiple documents (Bhatia et al., 2014; Egan et al., 2016) . A specific multi-document scenario introduced by Bar-Haim et al. (2020a) is key point analysis where the goal is to map a collection of arguments to a set of salient key points (say, high-level arguments) to provide a quantitative summary of these arguments.", "cite_spans": [ { "start": 89, "end": 110, "text": "(Bhatia et al., 2014;", "ref_id": "BIBREF4" }, { "start": 111, "end": 129, "text": "Egan et al., 2016)", "ref_id": "BIBREF7" }, { "start": 181, "end": 204, "text": "Bar-Haim et al. (2020a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Key Point Analysis (KPA) shared task by Friedman et al. (2021) 1 invited systems for two complementary subtasks: matching arguments to key points and generating key points from a given set of arguments (Section 3). As part of this shared task, we present an approach with two complementary components, one for each subtask. For key point matching, we propose a model that learns a semantic embedding space where instances that match are closer to each other while non-matching instances are further away from each other. We learn to embed instances by utilizing a contrastive loss function in a siamese neural network (Bromley et al., 1994) . For the key point generation, we present a graph-based extractive summarization approach similar to the work of Alshomary et al. (2020a) . It utilizes a PageRank variant to rank sentences in the input arguments by quality and predicts the top-ranked sentences to be key points. In an additional experiment, we also investigated an approach that performs aspect identification on arguments, followed by aspect clustering to ensure diversity. Finally, arguments with the best coverage of these diverse aspects are extracted as key points.", "cite_spans": [ { "start": 44, "end": 66, "text": "Friedman et al. (2021)", "ref_id": "BIBREF8" }, { "start": 622, "end": 644, "text": "(Bromley et al., 1994)", "ref_id": "BIBREF5" }, { "start": 759, "end": 783, "text": "Alshomary et al. (2020a)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approaches yielded the top performance among all submissions to the shared task in both quantitative and qualitative evaluation conducted by the organizers of the shared task (Section 5). 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summarization, arguments are relatively understudied compared to other document types such as news articles or scientific literature, but a few approaches have come up in the last years.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In an extractive manner, argument mining has been employed to identify the main claim as the summary of an argument (Petasis and Karkaletsis, 2016; Daxenberger et al., 2017) . 
Wang and Ling (2016) used a sequence-to-sequence model for the abstractive summarization of arguments from online debate portals. A complementary task of generating conclusions as informative argument summaries was introduced by Syed et al. (2021) . Similar to Alshomary et al. (2020b) who inferred a conclusion's target with a triplet neural network, we rely on contrastive learning here, using a siamese network though. Also, we build upon ideas of Alshomary et al. (2020a) who proposed a graph-based model using PageRank (Page et al., 1999) that extracts the argument's conclusion and the main supporting reason as an extractive summary. All these works represent the single-document summarization paradigm where only one argument is summarized at a time, whereas the given shared task is a multi-document summarization setting.", "cite_spans": [ { "start": 116, "end": 147, "text": "(Petasis and Karkaletsis, 2016;", "ref_id": "BIBREF15" }, { "start": 148, "end": 173, "text": "Daxenberger et al., 2017)", "ref_id": "BIBREF6" }, { "start": 176, "end": 196, "text": "Wang and Ling (2016)", "ref_id": "BIBREF21" }, { "start": 405, "end": 423, "text": "Syed et al. (2021)", "ref_id": "BIBREF19" }, { "start": 437, "end": 461, "text": "Alshomary et al. (2020b)", "ref_id": "BIBREF1" }, { "start": 700, "end": 719, "text": "(Page et al., 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The first approaches to multi-document argument summarization aimed to identify the main points of online discussions. Among these, Egan et al. (2016) grouped verb frames into pattern clusters that serve as input to a structured summarization pipeline, whereas Misra et al. (2016) proposed a more condensed approach by directly extracting argumentative sentences, summarized by similarity clustering. Bar-Haim et al. (2020a) continued this line of research by introducing the notion of key points and contributing the ArgsKP corpus, a collection of arguments mapped to manually-created key points. These key points are concise and selfcontained sentences that capture the gist of the arguments. Later, Bar-Haim et al. (2020b) proposed a quantitative argument summarization framework that automatically extracts key points from a set of arguments. Building upon this research, our approach aims to increase the quality of such generated key points, including a strong relation identifier between arguments and key points.", "cite_spans": [ { "start": 261, "end": 280, "text": "Misra et al. (2016)", "ref_id": "BIBREF13" }, { "start": 401, "end": 424, "text": "Bar-Haim et al. (2020a)", "ref_id": "BIBREF2" }, { "start": 702, "end": 725, "text": "Bar-Haim et al. (2020b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the context of computational argumentation, Bar-Haim et al. (2020a) introduced the notion of a key point as a high-level argument that resembles a natural language summary of a collection of more descriptive arguments. Specifically, the authors defined a good key point as being \"general enough to match a significant portion of the arguments, yet informative enough to make a useful summary.\" In this context, the KPA shared task consists of two subtasks as described below:", "cite_spans": [ { "start": 47, "end": 70, "text": "Bar-Haim et al. (2020a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "1. Key point matching. 
Given a set of arguments on a certain topic that are grouped by their stance and a set of key points, assign each argument to a key point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "2. Key point generation and matching. Given a set of arguments on a certain topic that are grouped by their stance, first generate five to ten key points summarizing the arguments. Then, match each argument in the set to the generated key points (as in the previous track).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "Data We start from the dataset provided by the organizers as described in Friedman et al. (2021) . The dataset contains 28 controversial topics, with 6515 arguments and a total of 243 key points. For each argument, its stance towards the topic as well as a quality score are given. Each topic is represented by at least three key points, with at least one key point per stance and at least three arguments matched to a key point. From the given arguments, 4.7% are unmatched, 67.5% belong to a single key point, and 5.0% belong to multiple key points. The remaining 22.8% of the arguments have ambiguous labels, meaning that the annotators could not agree on a correct matching to the key points. The final dataset contains 24,093 argument-key point pairs, of which 20.7% are labeled as matching. To develop our approach, we use the split as provided by the organizers with 24 topics for training, four topics for validation, and three topics for testing.", "cite_spans": [ { "start": 74, "end": 96, "text": "Friedman et al. (2021)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "Our approach consists of two components, each corresponding to one subtask of the KPA shared task. The first subtask of matching arguments to key points is modeled as a contrastive learning task using a siamese neural network. The second subtask requires generating key points for a collection of arguments and then matching them to the arguments. We investigated two models for this subtask: One is a graph-based extractive summarization model utilizing PageRank (Page et al., 1999) to extract sentences representing the key points; the other identifies aspects from the arguments and selects the most representative sentences that maximize the coverage of these aspects as the key points.", "cite_spans": [ { "start": 464, "end": 483, "text": "(Page et al., 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "Conceptually, we consider pairs of arguments and key points that are close to each other in a semantic embedding space as possible candidates for matching. Furthermore, we seek to transform this space key point kp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Matching", "sec_num": "4.1" }, { "text": "other kp' a 1 a 2 a 3 key point f(kp) other f(kp') f(a 1 ) f(a 2 ) f(a 3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Matching", "sec_num": "4.1" }, { "text": "Learned embedding space Figure 1 : We learn to transform an embedding space into a new space in which matching pairs of key point and argument (e.g., kp and a 1 ) are closer to each other, and the distance between non-matching pairs (e.g., kp \u2032 and a 1 ) is larger. 
For simplicity, kp and kp \u2032 each represent a concatenation of key point and topic.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Default embedding space", "sec_num": null }, { "text": "into a new embedding space where matching pairs are closer and the non-matching ones are more distant from each other (Figure 1 ). To do so, we utilize a siamese neural network with a contrastive loss function. Specifically, in the training phase, the input is a topic along with a key point, an argument, and a label (matching or not). First, we use a pretrained language model to encode the tokens of the argument as well as those of the concatenation of the topic and the key point. Then, we pass their embeddings through a siamese neural network, which is a mean-pooling layer that aggregates the token embeddings of each input, resulting in two sentencelevel embeddings. We compute the contrastive loss using these embeddings as follows:", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 127, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Default embedding space", "sec_num": null }, { "text": "L = \u2212y \u2022 log(\u0177) + (1 \u2212 y) \u2022 log(1 \u2212\u0177)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Default embedding space", "sec_num": null }, { "text": "where\u0177 is the cosine similarity of the embeddings, and y reflects whether a pair matches (1) or not (0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Default embedding space", "sec_num": null }, { "text": "Our primary model for key point generation is a graph-based extractive summarization model. Additionally, we also investigate clustering the aspects of the given collection of arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "Graph-based Summarization Following the work of Alshomary et al. (2020a), we first construct an undirected graph with the arguments' sentences as nodes. As a filtering step, we compute argument quality scores for each sentence as Toledo et al. (2019) and exclude low-quality arguments from the graph. Next, we employ our key point matching model (Section 4.1) to compute the edge weight between two nodes as the pairwise matching score of the corresponding sentences. Only nodes with a score above a defined threshold are connected via an edge. An example graph is sketched in Figure 2 . Finally, we use a variant of PageRank (Page et al., 1999) to compute an importance score P (s i ) for each sentence s i as follows:", "cite_spans": [ { "start": 230, "end": 250, "text": "Toledo et al. (2019)", "ref_id": "BIBREF20" }, { "start": 626, "end": 645, "text": "(Page et al., 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 577, "end": 585, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (si) = (1 \u2212 d) \u2022 s j \u0338 =s i match(si, sj) s k \u0338 =s j match(sj, s k ) P (sj) + d \u2022 qual(si) s k qual(s k )", "eq_num": "(1)" } ], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "where d is a damping factor that can be configured to bias the algorithm towards the argument quality score qual or the matching score match. 
To ensure diversity, we iterate through the ranked list of sentences (in descending order), adding a sentence to the final set of key points if its maximum matching score with the already selected candidates is below a certain threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "Aspect Clustering Extracting key points is conceptually similar to identifying aspects (Bar-Haim et al., 2020a) , which inspired our clustering approach that selects representative sentences from multiple aspect clusters as the final key points. We employ the tagger of Schiller et al. (2021) to extract the arguments' aspects (on average, 2.1 aspects per argument). To tackle the lack of diversity, we follow Heinisch and Cimiano (2021) and create k diverse aspect clusters by projecting the extracted aspect phrases to an embedding space. Next, we model the candidate selection of argument sentences as the set cover problem. Specifically, the", "cite_spans": [ { "start": 87, "end": 111, "text": "(Bar-Haim et al., 2020a)", "ref_id": "BIBREF2" }, { "start": 270, "end": 292, "text": "Schiller et al. (2021)", "ref_id": "BIBREF17" }, { "start": 410, "end": 437, "text": "Heinisch and Cimiano (2021)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "Approach R-1 R-2 R-L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "Graph-based Summarization 19.8 3.5 18.0 Aspect Clustering 18.9 4.7 17.1 Table 1 : ROUGE scores on the test set for our two approaches to key point generation.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "final set of key points summarizing the arguments for a given topic and stance maximizes the coverage of the set of arguments' aspects. To this end, we apply greedy approximation for selecting our candidates, where an argument sentence is chosen if it covers the maximum number of unique aspect clusters while having the smallest overlap with the clusters covered by the already selected candidates. Also, to avoid redundant key points, we compute its semantic similarity to the already selected candidates in each candidate selection step, and we add it to the final set if its score is below a certain threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "4.2" }, { "text": "In the following, we present implementation details of our two components, and we report on their quantitative and qualitative results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Evaluation", "sec_num": "5" }, { "text": "We employed RoBERTa-large (Liu and Lapata, 2019) for encoding the tokens of the two inputs of key point matching to the siamese neural network, which acts as a mean-pooling layer and projects the encoder outputs (matrix of token embeddings) into a sentence embedding of size 768. We used Sentence-BERT (Reimers and Gurevych, 2019) to train our model for 10 epochs, with batch size 32, and maximum input length of 70, leaving all other parameters to their defaults. For automatic evaluation, we computed both strict and relaxed mean Average Precision (mAP) following Friedman et al. (2021) . 
In cases where there is no majority label for matching, the relaxed mAP considers them to be a match while the strict mAP considers them as not matching. In the development phase, we trained our model on the training split and evaluated on the validation split provided by the organizers. The strict and relaxed mAP on the validation set were 0.84 and 0.96 respectively. For the final submission, we did a five-fold cross validation on the combined data (training and validation splits) creating an ensemble for the matching (as per the mean score).", "cite_spans": [ { "start": 26, "end": 48, "text": "(Liu and Lapata, 2019)", "ref_id": "BIBREF12" }, { "start": 302, "end": 330, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF16" }, { "start": 566, "end": 588, "text": "Friedman et al. (2021)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Key Point Matching", "sec_num": "5.1" }, { "text": "For the graph-based summarization model, we employed Spacy (Honnibal et al., 2020) to split the arguments into sentences. Similar to (Bar-Haim et al., 2020b) , only sentences with a minimum of 5 and a maximum of 20 tokens, and not starting with a pronoun, were used for building the graph. Argument quality scores for each sentence were obtained from Project Debater's API (Toledo et al., 2019) 3 . We selected the thresholds for the parameters d, qual and match in Equation 1 as 0.2, 0.8 and 0.4 respectively, optimizing for ROUGE (Lin, 2004) . In particular, we computed ROUGE-L between the ground-truth key points and the top 10 ranked sentences as our predictions, averaged over all the topic and stance combinations in the training split. We excluded sentences with a matching score higher than 0.8 with the selected candidates to minimize redundancy.", "cite_spans": [ { "start": 59, "end": 82, "text": "(Honnibal et al., 2020)", "ref_id": null }, { "start": 133, "end": 157, "text": "(Bar-Haim et al., 2020b)", "ref_id": "BIBREF3" }, { "start": 373, "end": 394, "text": "(Toledo et al., 2019)", "ref_id": "BIBREF20" }, { "start": 532, "end": 543, "text": "(Lin, 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "5.2" }, { "text": "For aspect clustering, we created 15 clusters per topic and stance combination. After greedy approximation of the candidate sentences, we removed redundant ones using a threshold of 0.65 for the normalized BERTScore (Zhang et al., 2020) with the previously selected candidates.", "cite_spans": [ { "start": 216, "end": 236, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Key Point Generation", "sec_num": "5.2" }, { "text": "Comparison of both approaches To select our primary approach for key point generation, we first performed an automatic evaluation of the aforementioned models on the test set using ROUGE (Table 1) . Additionally, we performed a manual evaluation via pairwise comparison of the extracted key points for both models for a given topic and stance.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 196, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Key Point Generation", "sec_num": "5.2" }, { "text": "Examples of key points from both the models are shown in Table 2 . The key points from graph-based summarization model are relatively longer. This also improves their informativeness, matching findings of Syed et al. (2021) . 
For the aspect clustering, we observe that the key points are more focused on specific aspects such as \"disease\" (for Pro) and \"effectiveness\" (for Con). In a real-world application, this may provide the flexibility to choose key points by aspects of interest to the end-user, especially with further improvement of aspect tagger by avoiding non-essential extracted phrases as \"mandatory\". Hence, given the task of generating a quantitative summary of a collection of arguments, we believe that the graph-based summary provides", "cite_spans": [ { "start": 205, "end": 223, "text": "Syed et al. (2021)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Key Point Generation", "sec_num": "5.2" }, { "text": "Stance Graph-based Summarization Aspect Clustering", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic", "sec_num": null }, { "text": "Routine child vaccinations should be mandatory", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic", "sec_num": null }, { "text": "(1) Child vaccinations should be mandatory to provide decent health care to all. (2) Vaccines help children grow up healthy and avoid dangerous diseases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro", "sec_num": null }, { "text": "(3) Child vaccinations should be mandatory so our children will be safe and protected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro", "sec_num": null }, { "text": "(1) Child vaccination is needed for children, they get sick too. (2) Routine child vaccinations should be mandatory to prevent the disease. (3) Yes as they protect children from life threatening and highly infectious diseases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro", "sec_num": null }, { "text": "Routine child vaccinations should be mandatory", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pro", "sec_num": null }, { "text": "(1) Vaccination should exclude children to avoid the side effects that can appear on them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Con", "sec_num": null }, { "text": "(2) Parents should have the freedom to decide what they consider best for their children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Con", "sec_num": null }, { "text": "(3) The child population has a low degree of vulnerability, so vaccination is not urgent yet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Con", "sec_num": null }, { "text": "(1) Child vaccination shouldn't be mandatory because the virus isn't effective in children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Con", "sec_num": null }, { "text": "(2) Child vaccinations should not be mandatory because vaccines are expensive. (3) It has not been 100% proven if the vaccine is effective. Table 3 : Final evaluation results of both tracks, comparing our approach (mspl) to the top two submitted approaches, along with Bar-Haim et al. (2020b) approach (bar_h). The generated key points were ranked in terms of how relevant (Rel.) and representative (Rep.) of the input arguments, as well as their polarity (Pol.) 
a more comprehensive overview and chose this as our preferred approach for key point generation.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Con", "sec_num": null }, { "text": "In key point matching, our approach obtained a strict mAP of 0.789 and a relaxed mAP of 0.927 on the test set, the best result among all participating approaches. For the second track, in addition to evaluating the key point matching task, the shared task organizers manually evaluated the generated key points through a crowdsourcing study in which submitted approaches were ranked according to the quality of their generated key points. Table 3 presents the evaluation results of the top three submitted approaches, along with the reference approach of Bar-Haim et al. (2020b) . Among the submitted approaches, our approach was ranked the best in both the key point generation task as well as the key point matching task. For complete details on the evaluation, we refer to the task organizers' report (Friedman et al., 2021) .", "cite_spans": [ { "start": 556, "end": 579, "text": "Bar-Haim et al. (2020b)", "ref_id": "BIBREF3" }, { "start": 805, "end": 828, "text": "(Friedman et al., 2021)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 439, "end": 447, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Shared Task's Evaluation Results", "sec_num": "5.3" }, { "text": "This paper has presented a framework to tackle the key point analysis of arguments. For matching arguments to key points, we achieved the best performance in the KPA shared task via contrastive learning. For key point generation, we developed a graph-based extractive summarization model that output informative key points of high quality for a collection of arguments. We see abstractive key point generation as part of our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://2021.argmining.org/shared_task_ibm, last accessed: 2021-08-08 2 The code is available under https://github.com/webis-de/ ArgMining-21", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available under: https://early-access-program.debater.res. ibm.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extractive snippet generation for arguments", "authors": [ { "first": "Milad", "middle": [], "last": "Alshomary", "suffix": "" }, { "first": "Nick", "middle": [], "last": "D\u00fcsterhus", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event", "volume": "", "issue": "", "pages": "1969--1972", "other_ids": { "DOI": [ "10.1145/3397271.3401186" ] }, "num": null, "urls": [], "raw_text": "Milad Alshomary, Nick D\u00fcsterhus, and Henning Wachsmuth. 2020a. Extractive snippet generation for arguments. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1969-1972. 
ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Target inference in argument conclusion generation", "authors": [ { "first": "Milad", "middle": [], "last": "Alshomary", "suffix": "" }, { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "2020", "issue": "", "pages": "4334--4345", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.399" ] }, "num": null, "urls": [], "raw_text": "Milad Alshomary, Shahbaz Syed, Martin Potthast, and Henning Wachsmuth. 2020b. Target inference in ar- gument conclusion generation. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4334-4345. Association for Computa- tional Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "From arguments to key points: Towards automatic argument summarization", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Lilach", "middle": [], "last": "Eden", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Kantor", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Lahav", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4029--4039", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kan- tor, Dan Lahav, and Noam Slonim. 2020a. From arguments to key points: Towards automatic argu- ment summarization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4029-4039. Association for Com- putational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Quantitative argument summarization and beyond: Crossdomain key point analysis", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Kantor", "suffix": "" }, { "first": "Lilach", "middle": [], "last": "Eden", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Lahav", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "2020", "issue": "", "pages": "39--49", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.3" ] }, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Yoav Kantor, Lilach Eden, Roni Fried- man, Dan Lahav, and Noam Slonim. 2020b. Quanti- tative argument summarization and beyond: Cross- domain key point analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, Novem- ber 16-20, 2020, pages 39-49. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Summarizing online forum discussions -can dialog acts of individual messages help?", "authors": [ { "first": "Sumit", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "Prakhar", "middle": [], "last": "Biyani", "suffix": "" }, { "first": "Prasenjit", "middle": [], "last": "Mitra", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2127--2131", "other_ids": { "DOI": [ "10.3115/v1/d14-1226" ] }, "num": null, "urls": [], "raw_text": "Sumit Bhatia, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions -can dialog acts of individual messages help? In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 2127-2131. ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Signature verification using a \"siamese\" time delay neural network", "authors": [ { "first": "Jane", "middle": [], "last": "Bromley", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Guyon", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "S\u00e4ckinger", "suffix": "" }, { "first": "Roopak", "middle": [], "last": "Shah", "suffix": "" } ], "year": 1994, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "737--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1994. Signature verifi- cation using a \"siamese\" time delay neural network. In Advances in neural information processing sys- tems, pages 737-744.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "What is the essence of a claim? cross-domain claim identification", "authors": [ { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2055--2066", "other_ids": { "DOI": [ "10.18653/v1/D17-1218" ] }, "num": null, "urls": [], "raw_text": "Johannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, and Iryna Gurevych. 2017. What is the essence of a claim? cross-domain claim identi- fication. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2055-2066, Copenhagen, Denmark. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Summarising the points made in online political debates", "authors": [ { "first": "Charlie", "middle": [], "last": "Egan", "suffix": "" }, { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" }, { "first": "Adam", "middle": [ "Z" ], "last": "Wyner", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Third Workshop on Argument Mining, hosted by the 54th Annual Meeting of the Association for Computational Linguistics, ArgMining@ACL 2016", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/w16-2816" ] }, "num": null, "urls": [], "raw_text": "Charlie Egan, Advaith Siddharthan, and Adam Z. Wyner. 2016. Summarising the points made in online politi- cal debates. In Proceedings of the Third Workshop on Argument Mining, hosted by the 54th Annual Meet- ing of the Association for Computational Linguistics, ArgMining@ACL 2016, August 12, Berlin, Germany. The Association for Computer Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Overview of KPA-2021 shared task: Key point based quantitative summarization", "authors": [ { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Dankin", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Yufang", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roni Friedman, Lena Dankin, Yoav Katz, Yufang Hou, and Noam Slonim. 2021. Overview of KPA-2021 shared task: Key point based quantitative summariza- tion.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A multitask approach to argument frame classification at variable granularity levels. it -Information Technology", "authors": [ { "first": "Philipp", "middle": [], "last": "Heinisch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" } ], "year": 2021, "venue": "", "volume": "63", "issue": "", "pages": "59--72", "other_ids": { "DOI": [ "10.1515/itit-2020-0054" ] }, "num": null, "urls": [], "raw_text": "Philipp Heinisch and Philipp Cimiano. 2021. A multi- task approach to argument frame classification at vari- able granularity levels. it -Information Technology, 63(1):59-72.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.1212303" ] }, "num": null, "urls": [], "raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial- strength Natural Language Processing in Python.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. 
ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3730--3740", "other_ids": { "DOI": [ "10.18653/v1/D19-1387" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Measuring the similarity of sentential arguments in dialogue", "authors": [ { "first": "Amita", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Ecker", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "276--287", "other_ids": { "DOI": [ "10.18653/v1/W16-3636" ] }, "num": null, "urls": [], "raw_text": "Amita Misra, Brian Ecker, and Marilyn Walker. 2016. Measuring the similarity of sentential arguments in dialogue. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 276-287, Los Angeles. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The pageRank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pageRank citation rank- ing: Bringing order to the web. Technical report, Stanford InfoLab.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Identifying argument components through textrank", "authors": [ { "first": "Georgios", "middle": [], "last": "Petasis", "suffix": "" }, { "first": "Vangelis", "middle": [], "last": "Karkaletsis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Third Workshop on Argument Mining, hosted by the 54th Annual Meeting of the Association for Computational Linguistics, ArgMining@ACL 2016", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgios Petasis and Vangelis Karkaletsis. 2016. Identi- fying argument components through textrank. 
In Pro- ceedings of the Third Workshop on Argument Mining, hosted by the 54th Annual Meeting of the Associa- tion for Computational Linguistics, ArgMining@ACL 2016, August 12, Berlin, Germany.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Aspect-controlled neural argument generation", "authors": [ { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021", "volume": "", "issue": "", "pages": "380--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Aspect-controlled neural argument generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 380-396. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "News editorials: Towards summarizing long argumentative texts", "authors": [ { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Roxanne", "middle": [ "El" ], "last": "Baff", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Kiesel", "suffix": "" }, { "first": "Khalid", "middle": [ "Al" ], "last": "Khatib", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "2020", "issue": "", "pages": "5384--5396", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.470" ] }, "num": null, "urls": [], "raw_text": "Shahbaz Syed, Roxanne El Baff, Johannes Kiesel, Khalid Al Khatib, Benno Stein, and Martin Potthast. 2020. News editorials: Towards summarizing long argumentative texts. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), Decem- ber 8-13, 2020, pages 5384-5396. 
International Com- mittee on Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Generating informative conclusions for argumentative texts", "authors": [ { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Khalid", "middle": [ "Al" ], "last": "Khatib", "suffix": "" }, { "first": "Milad", "middle": [], "last": "Alshomary", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahbaz Syed, Khalid Al Khatib, Milad Alshomary, Henning Wachsmuth, and Martin Potthast. 2021. Generating informative conclusions for argumenta- tive texts. CoRR, abs/2106.01064.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automatic argument quality assessment-new datasets and methods", "authors": [ { "first": "Assaf", "middle": [], "last": "Toledo", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Gretz", "suffix": "" }, { "first": "Edo", "middle": [], "last": "Cohen-Karlik", "suffix": "" }, { "first": "Roni", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Venezian", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Lahav", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5625--5635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019. Auto- matic argument quality assessment-new datasets and methods. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5625-5635.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural network-based abstract generation for opinions and arguments", "authors": [ { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "47--57", "other_ids": { "DOI": [ "10.18653/v1/N16-1007" ] }, "num": null, "urls": [], "raw_text": "Lu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 47-57, San Diego, California. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "cause unwanted side effects Children should not be vaccinated because they can have serious side effects Does not need it as children have better immune systems Vaccination should exclude children to avoid the side effects that can appear on them Linking a measure as good as vaccination to coercive measures would cause serious harm Forcing people to have their children vaccinated goes against free will As long as vaccines are not free of side effects, it cannot make them mandatory for our children The child population has a low degree of vulnerability, so vaccination is not urgent yet I as a parent should decide Vaccination in the child population is not yet a vulnerable age so it is not a priority Parents should be allowed to choose if their child is vaccinated or not Parents should have the freedom to decide what they consider best for their children Let them decide if they want to be vaccinatedVaccination is an option, not everyone thinks they really are important and free will must be respected[...]", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Example graph of our key point generation approach. Nodes with high saturation are considered to be key points (bold text). Nodes with dashed lines have lower argument quality. Edge thickness represents similarity between two nodes. Notice that the shown arguments do not reflect the view of the authors.", "type_str": "figure" }, "TABREF0": { "content": "
Approach | KP Matching: mAP / Rank | KP Generation: Rel. | Rep. | Pol.
bar_h | 0.885 / 1 | 2 | 1 | 1
mspl (ours) | 0.818 / 2 | 2 | 1 | 2
sohanpat | 0.491 / 3 | 4 | 4 | 2
peratham | 0.443 / 4 | 1 | 3 | 4
", "text": "Examples of keypoints from our proposed approaches. Only the top three key points are shown for brevity.", "type_str": "table", "html": null, "num": null } } } }