{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:17.346907Z"
},
"title": "Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders",
"authors": [
{
"first": "Jan",
"middle": [
"Heinrich"
],
"last": "Reimer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin Luther University Halle-Wittenberg",
"location": {
"country": "Germany"
}
},
"email": "jan.reimer@student.uni-halle.de"
},
{
"first": "Thi",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin Luther University Halle-Wittenberg",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Hanh",
"middle": [],
"last": "Luu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin Luther University Halle-Wittenberg",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Max",
"middle": [],
"last": "Henze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin Luther University Halle-Wittenberg",
"location": {
"country": "Germany"
}
},
"email": "max.henze@student.uni-halle.de"
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Martin Luther University Halle-Wittenberg",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches for argument key point matching. For key point matching the task is to decide if a short key point matches the content of an argument with the same topic and stance towards the topic. We approach this task in two ways: First, we develop a simple rule-based baseline matcher by computing token overlap after removing stop words, stemming, and adding synonyms/antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as a regression classifier for only a single epoch. We manually examine errors of our proposed matcher models and find that long arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision score of 0.913, the best score for strict labels of all participating teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Arguments influence our decisions in many places of our daily life (Bar-Haim et al., 2020a) . But with the increasingly larger amount of information found on the Web 1 and more effective argument mining, people often need to summarize arguments (Lawrence and Reed, 2019; Bar-Haim et al., 2020a) . Bar-Haim et al. (2020a) see matching key points to arguments as an intermediate step towards automatically generating argumentative summaries (Section 2). The ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis (Friedman et al., 2021) is the first task on key point matching, which is an important step towards summarizing arguments. By matching arguments with a pre-defined set of key points, an argumentative text can be summarized using the prevalence of the key points in it. Different approaches of matching argument key point 1 https://internetlivestats.com/ pairs, here called matchers, should be proposed and discussed. Given an argument and a key point, a matcher should return a real value between 0 and 1 which represents the extent to which the argument matches the key point. 2 For evaluating different argument key point matchers, the shared task organizers uses mean average precision evaluation as a metric (Friedman et al., 2021) to evaluate the approaches and publish the ArgKP-2021 benchmark dataset (Section 3) to compare matchers (Bar-Haim et al., 2020a) .",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
},
{
"start": 245,
"end": 270,
"text": "(Lawrence and Reed, 2019;",
"ref_id": "BIBREF10"
},
{
"start": 271,
"end": 294,
"text": "Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
},
{
"start": 297,
"end": 320,
"text": "Bar-Haim et al. (2020a)",
"ref_id": "BIBREF1"
},
{
"start": 536,
"end": 559,
"text": "(Friedman et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 1114,
"end": 1115,
"text": "2",
"ref_id": null
},
{
"start": 1248,
"end": 1271,
"text": "(Friedman et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 1376,
"end": 1400,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pretrained language models like BERT and RoBERTa are nowadays becoming standard approaches to tackle various Natural Language Processing tasks (Devlin et al., 2019; Liu et al., 2019 ). Because of their extensive pretraining, often fine-tuning these language models with even a small task-specific dataset can achieve state-ofthe-art performance (Devlin et al., 2019) . As the ArgKP-2021 dataset (Bar-Haim et al., 2020a) used in the ArgMining 2021 shared task on Quantitative Summarization is relatively small (24 083 labelled pairs), we decide to fine-tune BERT and RoBERTa language models rather than train a neural classifier from scratch (Section 4).",
"cite_spans": [
{
"start": 143,
"end": 164,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 165,
"end": 181,
"text": "Liu et al., 2019",
"ref_id": null
},
{
"start": 345,
"end": 366,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 395,
"end": 419,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contrasting this neural approach, we introduce a simple rule-based baseline matcher that compares preprocessed tokens of each argument to the tokens of each key point (Section 4). For the baseline, we compute token overlap after removing stop words, adding synonyms and antonyms, and stemming the tokens from both argument and key point using the NLTK toolkit (Bird and Loper, 2004) .",
"cite_spans": [
{
"start": 360,
"end": 382,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our fine-tuned RoBERTa-Base matcher achieves a mean average precision score of up to 0.967 and ranks second in the shared task's leaderboard (Section 5). In a manual error analysis, we find that the imbalanced ArgKP-2021 dataset causes neural models to predict non-matching argument key point pairs more precisely than matching pairs (Section 6). We further observe a tendency that large length differences between arguments and key points can cause errors. To encourage researchers to train more robust argument key point matchers, we release our source code under a free license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similar tasks to key point analysis include clustering arguments (Reimers et al., 2019; Ajjour et al., 2019) , detecting similar arguments in a pairwise fashion (Misra et al., 2016) and matching arguments to generic-arguments (Naderi and Hirst, 2017) . Using points to summarize arguments were approached by Egan et al. (2016) on online discussion. Points were extracted by using the verbs and their syntactic arguments and are then clustered together to deliver a summary of the discussion.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Reimers et al., 2019;",
"ref_id": null
},
{
"start": 88,
"end": 108,
"text": "Ajjour et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 161,
"end": 181,
"text": "(Misra et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 226,
"end": 250,
"text": "(Naderi and Hirst, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 308,
"end": 326,
"text": "Egan et al. (2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Key point analysis is the task of matching a given argument with one or more pre-defined key points (Bar-Haim et al., 2020a) . To develop models for the task, Bar-Haim et al. (2020a) introduced a dataset (ArgKP-2021) which contains 24 093 argument key point pairs on 28 topics. Each argument and key point is labeled manually as match or no-match. The authors experimented with several unsupervised and supervised approaches to perform the task in a cross-topic experimental setting. BERT (Devlin et al., 2019) performed the best in their experiments by reaching an F1 score of 0.68.",
"cite_spans": [
{
"start": 100,
"end": 124,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
},
{
"start": 159,
"end": 182,
"text": "Bar-Haim et al. (2020a)",
"ref_id": "BIBREF1"
},
{
"start": 489,
"end": 510,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In a later work, Bar-Haim et al. (2020b) develop a summarization approach for online discussions that uses key point analysis. The summrization approach takes as input a set of comments on a given topic and extracts a set of representative key points from them. The output of the summarization approach is the set of extracts key points together with the count of matched comments for each key point. In its essence, the summarization approach uses a matching model that gives a score for a given comment and key point or a pair of key points. For matching models, Bar-Haim et al. (2020b) compare different variants of BERT (Devlin et al., 2019) . Among the tested models, ALBERT (Lan et al., 2019) performed the best with an F1 score 0.809, but RoBERTa (Liu et al., 2019) were chosen for key point extraction at the end, which is 6 times faster than ALBERT and still achieves an F1 score of 0.773.",
"cite_spans": [
{
"start": 565,
"end": 588,
"text": "Bar-Haim et al. (2020b)",
"ref_id": "BIBREF2"
},
{
"start": 624,
"end": 645,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 680,
"end": 698,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 754,
"end": 772,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our approaches for the key point analysis are based on BERT and RoBERTa. BERT stands for Bidirectional Encoder Representations from Transformers and is an open-source bidirectional language representation model published by Google (Devlin et al., 2019) . BERT is pre-trained over unlabeled text to learn a language representation and can be fine-tuned on downstream tasks. During pre-training, BERT is trained on two unsupervised tasks: Masked Language Model and Next Structure Prediction. RoBERTa is an improved variant of BERT that is introduced by Facebook in 2019 (Liu et al., 2019) . Liu et al. (2019) modified BERT by using a larger training data size of 160GB of uncompressed text, more compute power, larger batch-training size, and optimized hyperparameters. In comparison to BERT, pre-training tasks for RoBERTa were done with full-length sentences and include only Masked Language Model while applying different masks in each training epoch (dynamic masking). RoBERTa outperforms BERT on all 9 GLUE tasks in the single-task setting and 4 out of 9 tasks in the ensembles setting (Wang et al., 2018; Liu et al., 2019) .",
"cite_spans": [
{
"start": 231,
"end": 252,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 568,
"end": 586,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 589,
"end": 606,
"text": "Liu et al. (2019)",
"ref_id": null
},
{
"start": 1089,
"end": 1108,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 1109,
"end": 1126,
"text": "Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The dataset used in the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis is the ArgKP-2021 dataset (Bar-Haim et al., 2020a) which consists of 24 083 argument and key point pairs labeled as matching/nonmatching. They all belong to one of 28 controversial topics, for example: \"Assisted suicide should be a criminal offence\". Every key point and argument pair is annotated with its stance towards the topic. The training split of the ArgKP-2021 dataset has 5 583 arguments belonging to 207 key points within 24 topics. This leaves the validation split with 932 arguments and 36 key points for 4 topics. Friedman et al. (2021) complement the ArgKP-2021 dataset's training and validation split with a test split that is used to evaluate submissions to the shared task. The test split contains 723 arguments with 33 key points from 3 topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Here, we do qualitative and quantative analyses of the ArgKP-2021 dataset. Table 1 shows examples of argument key point pairs from the ArgKP-2021 dataset (Bar-Haim et al., 2020a) . In pair A from Table 1 , the argument matches the given key point. ",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 196,
"end": 203,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Characteristics",
"sec_num": "3.1"
},
{
"text": "A child actors can be overworked and they can miss out on their education.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},
{
"text": "Being a performer harms the child's education B as long as nuclear weapons exist, the entire world has to worry about nations deciding to fire them at another or terrorists getting hold of them and causing disaster Nuclear weapons can fall into the wrong hands C 'people reach their limit when it comes to their quality of life and should be able to end their suffering. this can be done with little or no suffering by assistance and the person is able to say good bye. Both sentences discuss children actors and their education. The word \"actors\" is not explicitly used in the key point but is semantically similar to the word \"performer\". Such lexical variation can be opposed by using WordNet (Miller, 1995) to find synonyms and antonyms.",
"cite_spans": [
{
"start": 696,
"end": 710,
"text": "(Miller, 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},
{
"text": "Pair B in Table 1 is a harder example, since the argument matches the key point but are expressed differently. The key point makes usage of \"wrong hands\" as figurative meaning for \"nations\" and \"terrorists\" from the argument. In comparison to pair A, the linguistic variation in pair B goes beyond finding synonyms and requires a deep understanding of the semantics of the argument and key point. Figure 1 shows the average length of the arguments and key points in the training and developments splits. As shown, the arguments in the ArgKP-2021 dataset are substantially longer than key points. In the training set, the average length of arguments is 109 characters. Compared to that, key points are on average only half as long (52 characters). In the validation set, the key points have an average length of 41 characters and therefore key points are shorter than those in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 397,
"end": 405,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},
{
"text": "The average length of arguments remains almost the same at 108 characters. The proportion of arguments that are 67 characters longer than key points constitute 39 % of the training set and 44 % of the validation set. We can see that there are more short key points in the validation set. This length difference might be a challenge for the models in key point matching (Section 6). Pair C is an example of an argument and key point pair with a large length difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},
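{
"text": "As an illustration, the following minimal sketch computes such length statistics with pandas; the column names 'argument' and 'key_point' are assumptions about how the pairs are stored, and we read the 67-character threshold as 'at least 67 characters longer':\n\nimport pandas as pd\n\ndef length_statistics(pairs: pd.DataFrame) -> dict:\n    # pairs: one row per argument key point pair (assumed columns 'argument' and 'key_point').\n    arg_len = pairs['argument'].str.len()\n    kp_len = pairs['key_point'].str.len()\n    return {\n        'avg_argument_length': arg_len.mean(),\n        'avg_key_point_length': kp_len.mean(),\n        # Share of pairs where the argument is at least 67 characters longer than the key point.\n        'share_much_longer': ((arg_len - kp_len) >= 67).mean(),\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},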
{
"text": "All in all, we identify the following major difficulties in matching key points to arguments: semantically similar words, meaning understanding, and the length difference between the arguments and key points. In the following section, we approach the first two problems while developing our baseline and approaches. In Section 6, we analyze the errors made by our approaches with regard to the length difference between the arguments and key points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# Argument Key point",
"sec_num": null
},
{
"text": "To match key points to arguments, we propose two different approaches. First, we discuss a simple yet effective baseline measuring token overlap be-tween key points and arguments. Second, to improve upon this simple baseline, we introduce an approach based on BERT and RoBERTa (Devlin et al., 2019; Liu et al., 2019) . We fine-tune both language models in standard configuration with only minor changes highlighted below.",
"cite_spans": [
{
"start": 277,
"end": 298,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 299,
"end": 316,
"text": "Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "To be able to compare more sophisticated matchers, we first propose a very simple token overlap baseline using preprocessed tokens from each argument and key point, as parsed by the NLTK toolkit (Bird and Loper, 2004) . In general, key points summarize ideas of their matched arguments. Our intuition, therefore, is that certain words or tokens from an argument are also likely to be present in its matched key points. Rather than using completely new words for summarization of arguments, a human would tend to reuse important words from the argument. For example, in the argument and key point pair C from Table 1 both, the argument and key point, contain the token \"suffering\".",
"cite_spans": [
{
"start": 195,
"end": 217,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
{
"text": "We can further increase the token overlap between arguments and key points by preprocessing their tokens as following: First, we remove stop words for reducing noise within all arguments. Initially, this can seem counterproductive because with fewer words the highest possible overlap would also decrease and therefore could lead to worse performance. However, a lot of arguments and key points contain functional words like \"the\", \"and\" or \"as\". Removing these results in sentences that contain more specific information and thus leads to less confusion with the token overlap matcher. As a second preprocessing step, we reduce tokens to their corresponding stems by applying stemming using the Snowball stemmer (Porter, 1980) . We expect the token overlap matcher to be able to generalize more when comparing tokens. For example, the words \"assistance\" and \"assisted\" from the above example (Table 1, C) are both stemmed to \"assist\" with the Snowball stemmer. Consequently, stemming creates an overlap between different forms of the same word and, for instance, increases the probability that an argument containing \"assistance\" is associated with a key point containing \"assisted\". Last, we increase generalization for token overlap even further by supplementing the set of tokens with synonyms and antonyms (Miller, 1995) . This step should also increase the chance of overlapping tokens.",
"cite_spans": [
{
"start": 713,
"end": 727,
"text": "(Porter, 1980)",
"ref_id": "BIBREF16"
},
{
"start": 1311,
"end": 1325,
"text": "(Miller, 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 893,
"end": 905,
"text": "(Table 1, C)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
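{
"text": "A minimal sketch of this preprocessing, assuming the NLTK 'stopwords', 'punkt', and 'wordnet' resources are installed; the function name 'preprocess' and the exact token filtering are ours and only illustrate the three steps, not our exact implementation:\n\nfrom nltk.corpus import stopwords, wordnet\nfrom nltk.stem.snowball import SnowballStemmer\nfrom nltk.tokenize import word_tokenize\n\nSTOP_WORDS = set(stopwords.words('english'))\nSTEMMER = SnowballStemmer('english')\n\ndef preprocess(sentence: str) -> set:\n    # Lowercased word tokens without punctuation, with stop words removed.\n    tokens = [t.lower() for t in word_tokenize(sentence) if t.isalpha()]\n    tokens = [t for t in tokens if t not in STOP_WORDS]\n    # Expand with WordNet synonyms and antonyms to bridge lexical variation.\n    expanded = set(tokens)\n    for token in tokens:\n        for synset in wordnet.synsets(token):\n            for lemma in synset.lemmas():\n                expanded.add(lemma.name().lower())\n                for antonym in lemma.antonyms():\n                    expanded.add(antonym.name().lower())\n    # Reduce all tokens to their stems.\n    return {STEMMER.stem(t) for t in expanded}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},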
{
"text": "To compute the similarity between an argument and a key point, let tokens a be the set of preprocessed tokens from an argument a and tokens k the set of tokens from a key point k. We calculate the set of overlapping tokens like this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
{
"text": "overlap a,k = {t : t \u2208 tokens a \u2227 t \u2208 tokens k } (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
{
"text": "The token overlap matcher returns matching scores based on the overlap size weighted against the minimum size of either the argument or the key point:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
{
"text": "score a,k = |overlap a,k | min{|tokens a |, |tokens k |} (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
{
"text": "That is, pairs with a higher proportion of tokens that appear both in the argument as well as in the key point are classified with a higher matching score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},
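{
"text": "As a sketch, equations (1) and (2) translate directly into set operations over the preprocessed token sets ('preprocess' refers to the preprocessing sketch above; both function names are ours):\n\ndef overlap(tokens_a: set, tokens_k: set) -> set:\n    # Equation (1): tokens that occur in both the argument and the key point.\n    return tokens_a & tokens_k\n\ndef match_score(argument: str, key_point: str) -> float:\n    # Equation (2): overlap size relative to the smaller of the two token sets.\n    tokens_a = preprocess(argument)\n    tokens_k = preprocess(key_point)\n    if not tokens_a or not tokens_k:\n        return 0.0\n    return len(overlap(tokens_a, tokens_k)) / min(len(tokens_a), len(tokens_k))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Overlap Baseline",
"sec_num": "4.1"
},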
{
"text": "To improve upon this simple token overlap baseline, we fine-tune BERT and RoBERTa Transformer models for classifying argument key point matches (Devlin et al., 2019; Liu et al., 2019) . While BERT is pretrained on a very large document corpus (16GB of raw data), RoBERTa is pretrained on an even larger corpus (160GB). Thus RoBERTa models can be fine-tuned on higher end task performance (Liu et al., 2019) . We tokenize both the arguments and the key points with BERT's default WordPiece tokenizer and the resulting sequences are trimmed to 512 tokens for both models. We then fine-tune the BERT-Base and RoBERTa-Base variants in the standard sentence-pair regression setting using the Simple Transformers library. 3 The input to the models is formatted as [CLS] argument [SEP] key point [SEP] for BERT and <s> argument </s></s> key point </s> for RoBERTa respectively. For classification, we interpret the regression output value as the probability of an argument matching a key point. That is, the training labels are always 0 or 1, depending on whether the corresponding pair in the training set matches or not. Both model variants contain 12 hidden layers with a hidden size of 768 and 12 attention heads. We train each of the two models for one single epoch at a learning rate of \u03b7 = 2 \u2022 10 \u22125 . We use an AdamW optimizer with \u03b2 = (0.9, 0.999) and zero weight decay (Loshchilov and Hutter, 2019) . The optimizer is warmed up with a ratio of 6 % of the training data, and we fine-tune both models with the binary cross-entropy loss. We explore three ways of handling argument key point pairs in the training set with missing ground-truth label. In the first way, we remove those pairs completely from the traning dataset. In the second and third ways, we assume all the arguments and key points with missing labels to be either a match or no-match. By comparing the effectiveness of the models, we find that the first way lead to the best effectiveness on the validation set. Similarly, we experiment with textual data augmentation 4 (swapping synonyms, randomly omitting words) to increase the amount of training data, leading to no improvement on validation scores either. Thus, for the submitted model, we consider only training pairs that have an associated ground-truth label and do not oversample. We don't restrict the models output to the interval [0, 1]-like we did for the baseline-, as the shared task did not mention constraints on the score value that should be returned by a matcher.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 166,
"end": 183,
"text": "Liu et al., 2019)",
"ref_id": null
},
{
"start": 388,
"end": 406,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 716,
"end": 717,
"text": "3",
"ref_id": null
},
{
"start": 789,
"end": 794,
"text": "[SEP]",
"ref_id": null
},
{
"start": 1372,
"end": 1401,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuning",
"sec_num": "4.2"
},
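{
"text": "The following is a condensed sketch of this setup with the Simple Transformers library; the hyperparameters mirror the values given above, but the placeholder training data and the 'text_a'/'text_b'/'labels' column layout are illustrative assumptions rather than our exact training script:\n\nimport pandas as pd\nfrom simpletransformers.classification import ClassificationModel\n\n# Assumed layout: one row per argument key point pair with a 0/1 match label.\ntrain_df = pd.DataFrame({\n    'text_a': ['some argument text ...'],\n    'text_b': ['some key point text ...'],\n    'labels': [1.0],\n})\n\nmodel = ClassificationModel(\n    'roberta', 'roberta-base',\n    num_labels=1,  # single regression output interpreted as matching probability\n    args={\n        'regression': True,        # sentence-pair regression setting\n        'num_train_epochs': 1,     # a single epoch\n        'learning_rate': 2e-5,\n        'weight_decay': 0.0,\n        'warmup_ratio': 0.06,      # 6 % warm-up\n        'max_seq_length': 512,\n        'overwrite_output_dir': True,\n    },\n)\nmodel.train_model(train_df)\n\n# Predicted matching scores for unseen argument key point pairs.\nscores, _ = model.predict([['another argument ...', 'another key point ...']])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuning",
"sec_num": "4.2"
},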
{
"text": "Submissions to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis are evaluated with respect to mean average precision (Friedman et al., 2021) . The organizers calculate the score by pairing each argument with the best matching key point according to the predicted matching probabilities. Within each topic-stance combination, only 50 % of the arguments with the highest predicted matching score are then considered for evaluation. The task organizers claim that this removal of 50 % of the pairs is necessary because some arguments do not match any of the key points, which would influence mean average precision negatively (Friedman et al., 2021) . For the remaining argument key point pairs in each topic-stance combination, the average precision is calculated and the final score is computed as the mean of all average precision scores. The task organizers consider two evaluation settings: strict and relaxed. Both settings are based on the ground-truth labels from the ArgKP-2021 dataset (Bar-Haim et al., 2020a) . The two evaluation settings are created to account for argument key point pairs in the ArgKP-2021 with undecided labels (i.e. not enough agreement between annotators). In the strict setting, the shared task organizers consider those undecided pairs as no-match. In the relaxed setting, however, the shared task organizers consider the undecided pairs as match (Friedman et al., 2021) . The mean average precision score is then calculated in the two settings based on the ground-truth labels and the derived labels for the undecided pairs. We stress that in this complex evaluation setup, the mean average precision score in the relaxed setting would favor assuming matches in case of model uncertainty. In comparison, in the strict setting mean average precision would favor assuming no-match between an argument and key point. However, we find that because only the most probable matching key point is being considered for evaluation, this effect is minor. The evaluation score in general favors matchers that can match a single key point for each argument with high precision. It is however not important if a matcher does predict non-matches with high certainty.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Friedman et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 658,
"end": 681,
"text": "(Friedman et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 1027,
"end": 1051,
"text": "(Bar-Haim et al., 2020a)",
"ref_id": "BIBREF1"
},
{
"start": 1414,
"end": 1437,
"text": "(Friedman et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
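{
"text": "The following is a rough sketch of our reading of this protocol using scikit-learn's average_precision_score; the column names and the handling of slices without any matching pair are assumptions, and the official evaluation script may differ in details:\n\nimport pandas as pd\nfrom sklearn.metrics import average_precision_score\n\ndef mean_average_precision(pairs: pd.DataFrame) -> float:\n    # pairs: one row per argument key point pair with assumed columns\n    # 'topic', 'stance', 'arg_id', 'label' (0/1 ground truth), and 'score' (prediction).\n    average_precisions = []\n    for _, group in pairs.groupby(['topic', 'stance']):\n        # Pair each argument with its best-matching key point only ...\n        best = group.sort_values('score', ascending=False).drop_duplicates('arg_id')\n        # ... and keep the 50 % of arguments with the highest predicted score.\n        top_half = best.head(max(1, len(best) // 2))\n        if top_half['label'].sum() == 0:\n            continue  # no matching pair left in this topic-stance slice\n        average_precisions.append(average_precision_score(top_half['label'], top_half['score']))\n    return sum(average_precisions) / len(average_precisions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},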
{
"text": "In Table 2 , we report mean average precision in the strict and relaxed settings of the training, validation, and test set in the ArgKP-2021 dataset. We complement the mean average precision scores by adding precision and recall scores of the match label, both in the strict and relaxed setting. To calculate precision and recall, we label an argument and key point pair as match if their score is higher than 0.5 and as no-match otherwise. To aggregate results of the strict and relaxed settings, we also report the average score of the two variants. The reported scores should allow for automated and unbiased evaluation of our models and easier comparison with competitive approaches. We report all 27 scores for the token overlap baseline model as well as for the fine-tuned BERT-Base and RoBERTa-Base models. To make our results more comparable, we add a second baseline, where matches between arguments and key points of same topic and stance are predicted with uniform random probability. That random baseline represents a worst-case matcher and any weak matcher should exceed its evaluation scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.1"
},
{
"text": "The token overlap baseline achieves a mean average precision of 0.483 in the strict setting and 0.575 in the relaxed setting on the test set. Thus, it is nearly twice as good as a random matcher with respect to mean average precision. Even though this baseline has reasonably good scores on all datasets, we are concerned about the large discrepancies between its scores on the validation set and the train- Table 2 : Performance of the random and token overlap baseline, BERT-Base, and RoBERTa-Base models with respect to mean average precision (mAP), precision (P), and recall (R) of the match label. Precision and recall are calculated by deriving boolean labels from the matching scores with a threshold of 0.5 for all approaches. We report scores for the training, validation, and test set in the strict and relaxed label settings, as well as the averages of the two settings. The best result per set is highlighted bold. ing and test dataset. The rather simple baseline captures the similarity between an argument and a key point on the token level and might be sensative against more complicated parphrases. Both fine-tuned matchers outperform the baselines by a large margin. While the BERT-Base matcher achieves higher relaxed mean average precision on the training set than the RoBERTa-Base matcher, the RoBERTa-Base matcher is overall better than BERT, especially with strict labels. The RoBERTa-Base matcher achieves a mean average precision of 0.913 in the strict setting on the test set, and 0.967 in the relaxed setting. As these scores on the test set are nearly as high as on the training set, we argue that RoBERTa is a more robust language model and generalizes better than BERT. Also the RoBERTa-Base matcher performs better in terms of precision, while the BERT-Base matcher is better with respect to recall for all dataset splits in both the strict and relaxed settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 415,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.1"
},
{
"text": "To find errors of the two trained matchers, BERT-Base and RoBERTa-Base, in Figure 2 we show histograms of predicted match scores with respect to ground-truth labels. Both matchers classify most pairs correctly, which can be seen because the his-togram spikes around 0 for the true no-match label and around 1 for the true match label. We also observe that predictions on the training set are closer to the true label than on the development set for both RoBERTa-Base and BERT-Base. Even though we expect any machine-learned matcher to perform better on training data than on validation data, we see this as a room for improvement with better generalization. We notice that in Figures 2a and 2c both approaches predict non-matching argument key point pairs better than matching key points. This effect is likely to occur because of the higher amount of non-matching pairs provided in the training dataset. Most arguments match with only a few or even just a single key point. But nonetheless each argument is compared to all other key points; hence, the underlying data to learn from is imbalanced (Barandela et al., 2004) . Even though experiments with using textual data augmentation or simple oversampling to balance the dataset were unsuccessful (Dietterich, 1995) , more advanced oversampling or undersampling approaches could possibly resolve this issue. We further identify that the predicted matching scores of BERT-Base are spread a bit more than scores from RoBERTa-Base.",
"cite_spans": [
{
"start": 1097,
"end": 1121,
"text": "(Barandela et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 1249,
"end": 1267,
"text": "(Dietterich, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
{
"text": "In Figure 2b , we observe that the BERT-Base matcher falsely predicts certain non-matching pairs Table 3 . This argument which is in the training dataset has no matching key points. For this argument, the BERT-Base matcher has not learned well how to classify matches for that type of argument, and therefore predicts a neutral score of 0.48. However, the RoBERTa-Base matcher does not make that error. Both, the BERT-Base matcher and RoBERTa-Base falsely predict some argument key point pairs as no-match that are in fact labelled as a match. For example, it seems to be difficult to predict a match for the argument key point pair E from Table 3. The argument from the example is longer than most arguments and especially much longer than the key point (431 % more characters). It might be more challenging to reduce such longer arguments, that contain more complex information, to very compact key points. We confirm that observation by comparing the squared classification error with respect to the absolute difference between argument and key point lengths.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 12,
"text": "Figure 2b",
"ref_id": "FIGREF2"
},
{
"start": 97,
"end": 104,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},
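{
"text": "The comparison mentioned at the end of this section can be sketched as follows; the column names and the use of a simple Pearson correlation are illustrative assumptions:\n\nimport pandas as pd\n\ndef error_vs_length_difference(pairs: pd.DataFrame) -> float:\n    # pairs: assumed columns 'argument', 'key_point', ground-truth 'label' (0/1), and predicted 'score'.\n    squared_error = (pairs['score'] - pairs['label']) ** 2\n    length_difference = (pairs['argument'].str.len() - pairs['key_point'].str.len()).abs()\n    # A positive correlation indicates larger errors for pairs with larger length differences.\n    return squared_error.corr(length_difference)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6"
},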
{
"text": "We approach the practical problem of matching arguments with short key points with the goal of summarizing arguments. Although our token overlap baseline approach is very simple, it achieves a mean average precision of up to 0.575 on the test set, nearly double the score of a random matcher. The baseline approach is straightforward to implement but can not eliminate the problem of context understanding. RoBERTa-Base and BERT-Base have achieved good performance, because they can overcome the context understanding challenge. Our fine-tuned RoBERTa-Base model also performed better than BERT-Base in this task and scores a mean average precision of up to 0.967. With strict ground truth labels it achieves a mean average precision score of 0.913 on the test set, which is the best score of the participating teams in the shared task. This again shows the importance of architecture, training objectives, and hyperparameter selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In Section 3, we observed that transformer models tend to misclassify argument key point pairs if the argument and key point largely differ in length. As an extension to our approach, we propose to com-bine transformer models with the overlap baseline in an ensemble. Another possible improvement are recent improvements in language models (Sun et al., 2021) . If a language model is even more robust than, for example, RoBERTa, we expect a fine-tuned matcher to outperform the RoBERTa-Base matcher as well.",
"cite_spans": [
{
"start": 340,
"end": 358,
"text": "(Sun et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7.1"
},
{
"text": "https://2021.argmining.org/shared_ task_ibm.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://simpletransformers.ai/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/makcedward/nlpaug",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modeling frames in argumentation",
"authors": [
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Milad",
"middle": [],
"last": "Alshomary",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2915--2925",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019. Modeling frames in argu- mentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2915-2925.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "From arguments to key points: Towards automatic argument summarization",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Lilach",
"middle": [],
"last": "Eden",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Lahav",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "4029--4039",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.371"
]
},
"num": null,
"urls": [],
"raw_text": "Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kan- tor, Dan Lahav, and Noam Slonim. 2020a. From ar- guments to key points: Towards automatic argument summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 4029-4039. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Quantitative argument summarization and beyond: Crossdomain key point analysis",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Lilach",
"middle": [],
"last": "Eden",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Lahav",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "39--49",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.3"
]
},
"num": null,
"urls": [],
"raw_text": "Roy Bar-Haim, Yoav Kantor, Lilach Eden, Roni Fried- man, Dan Lahav, and Noam Slonim. 2020b. Quanti- tative argument summarization and beyond: Cross- domain key point analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, Novem- ber 16-20, 2020, pages 39-49. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The imbalanced training sample problem: Under or over sampling?",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Barandela",
"suffix": ""
},
{
"first": "Rosa",
"middle": [
"Maria"
],
"last": "Valdovinos",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"Salvador"
],
"last": "S\u00e1nchez",
"suffix": ""
},
{
"first": "Francesc",
"middle": [
"J"
],
"last": "Ferri",
"suffix": ""
}
],
"year": 2004,
"venue": "Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshops",
"volume": "3138",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-540-27868-9_88"
]
},
"num": null,
"urls": [],
"raw_text": "Ricardo Barandela, Rosa Maria Valdovinos, Jos\u00e9 Sal- vador S\u00e1nchez, and Francesc J. Ferri. 2004. The imbalanced training sample problem: Under or over sampling? In Structural, Syntactic, and Statisti- cal Pattern Recognition, Joint IAPR International Workshops, SSPR 2004 and SPR 2004, Lisbon, Portu- gal, August 18-20, 2004 Proceedings, volume 3138 of Lecture Notes in Computer Science, page 806. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "NLTK: the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird and Edward Loper. 2004. NLTK: the nat- ural language toolkit. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain, July 21-26, 2004 - Poster and Demonstration. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Overfitting and undercomputing in machine learning",
"authors": [
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1995,
"venue": "ACM Comput. Surv",
"volume": "27",
"issue": "3",
"pages": "326--327",
"other_ids": {
"DOI": [
"10.1145/212094.212114"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 1995. Overfitting and undercom- puting in machine learning. ACM Comput. Surv., 27(3):326-327.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Summarising the points made in online political debates",
"authors": [
{
"first": "Charlie",
"middle": [],
"last": "Egan",
"suffix": ""
},
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Wyner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on Argument Mining (ArgMining2016)",
"volume": "",
"issue": "",
"pages": "134--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charlie Egan, Advaith Siddharthan, and Adam Wyner. 2016. Summarising the points made in online politi- cal debates. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 134-143.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of kpa-2021 shared task: Key point based quantitative summarization",
"authors": [
{
"first": "Roni",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roni Friedman, Lena Dankin, Yoav Katz, Yufang Hou, and Noam Slonim. 2021. Overview of kpa-2021 shared task: Key point based quantitative summa- rization. In Proceedings of the 8th Workshop on Ar- gumentation Mining. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Argument mining: A survey",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2019,
"venue": "Comput. Linguistics",
"volume": "45",
"issue": "4",
"pages": "765--818",
"other_ids": {
"DOI": [
"10.1162/coli_a_00364"
]
},
"num": null,
"urls": [],
"raw_text": "John Lawrence and Chris Reed. 2019. Argument min- ing: A survey. Comput. Linguistics, 45(4):765-818.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. CoRR, abs/1711.05101.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Measuring the similarity of sentential arguments in dialogue",
"authors": [
{
"first": "Amita",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Ecker",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "276--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amita Misra, Brian Ecker, and Marilyn Walker. 2016. Measuring the similarity of sentential arguments in dialogue. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 276-287, Los Angeles. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Classifying Frames at the Sentence Level in News Articles",
"authors": [
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "536--542",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_070"
]
},
"num": null,
"urls": [],
"raw_text": "Nona Naderi and Graeme Hirst. 2017. Classifying Frames at the Sentence Level in News Articles. In Proceedings of Recent Advances in Natural Lan- guage Processing, pages 536-542.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "F",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin F Porter. 1980. An algorithm for suffix stripping. Program.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.09821"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. arXiv preprint arXiv:1906.09821.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Large-scale knowledge enhanced pre-training for language understanding and generation",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Siyu",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Junyuan",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Jiaxiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yanbin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Weixin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhihua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Weibao",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Zhizhou",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. CoRR, abs/2107.02137.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/w18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Proceed- ings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018, pages 353-355. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Lengths in characters for arguments and key points from the training and development set.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Predictions with RoBERTa-Base on the validation set.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Histograms of predicted labels on the training and validation sets for argument key point pairs with the BERT-Base and RoBERTa-Base classifiers. For good classifiers, predicted labels should approximately equal the true label (0 or 1).",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Examples of argument key point pairs from the ArgKP-2021 dataset(Bar-Haim et al., 2020a)",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Examples of argument key point pairs from the ArgKP-2021 dataset(Bar-Haim et al., 2020a) where the predicted score is off the ground truth label (True) with either the BERT-Base or RoBERTa-Base matcher. An example of such uncertain pair is the argument key point pair D from",
"num": null,
"content": "<table><tr><td># Argument</td><td>Key point</td><td colspan=\"3\">True BERT RoBERTa</td></tr><tr><td>D School uniforms can be less comfortable than</td><td>School uniforms</td><td>0</td><td>0.48</td><td>0.03</td></tr><tr><td>students' regular clothes.</td><td>are expensive</td><td/><td/><td/></tr><tr><td>E affirmative action discriminates the majority, pre-</td><td>Affirmative</td><td>1</td><td>-0.05</td><td>0.03</td></tr><tr><td>venting skilled workers from gaining employ-</td><td>action reduces</td><td/><td/><td/></tr><tr><td>ment over someone less qualified but considered</td><td>quality</td><td/><td/><td/></tr><tr><td>to be a member of a protected minority group.</td><td/><td/><td/><td/></tr></table>",
"html": null
}
}
}
}