{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:10.782562Z" }, "title": "Multi-task Learning in Argument Mining for Persuasive Online Discussions", "authors": [ { "first": "Nhat", "middle": [], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh", "location": {} }, "email": "" }, { "first": "Diane", "middle": [], "last": "Litman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pittsburgh", "location": {} }, "email": "dlitman@pitt.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We utilize multi-task learning to improve argument mining in persuasive online discussions, in which both micro-level and macro-level argumentation must be taken into consideration. Our models learn to identify argument components and the relations between them at the same time. We also tackle the low-precision which arises from imbalanced relation data by experimenting with SMOTE and XGBoost. Our approaches improve over baselines that use the same pre-trained language model but process the argument component task and two relation tasks separately. Furthermore, our results suggest that the tasks to be incorporated into multi-task learning should be taken into consideration as using all relevant tasks does not always lead to the best performance.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We utilize multi-task learning to improve argument mining in persuasive online discussions, in which both micro-level and macro-level argumentation must be taken into consideration. Our models learn to identify argument components and the relations between them at the same time. We also tackle the low-precision which arises from imbalanced relation data by experimenting with SMOTE and XGBoost. 
Our approaches improve over baselines that use the same pre-trained language model but process the argument component task and two relation tasks separately. Furthermore, our results suggest that the tasks to be incorporated into multi-task learning should be chosen carefully, as using all relevant tasks does not always lead to the best performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Argument mining (AM) focuses on automatically identifying argumentative structures in text, and utilizing these structures in applications. AM tasks include identifying argument components (e.g., \"claim\") and relations between them (e.g., \"support\"). However, most AM studies have focused on monologues or micro-level models of arguments (Peldszus and Stede, 2015; Persing and Ng, 2016; Stab and Gurevych, 2017) . AM in dialogues and macro-level models have received less attention (Bentahar et al., 2010; Chakrabarty et al., 2019b) .", "cite_spans": [ { "start": 338, "end": 364, "text": "(Peldszus and Stede, 2015;", "ref_id": "BIBREF10" }, { "start": 365, "end": 386, "text": "Persing and Ng, 2016;", "ref_id": "BIBREF12" }, { "start": 387, "end": 411, "text": "Stab and Gurevych, 2017)", "ref_id": "BIBREF15" }, { "start": 482, "end": 505, "text": "(Bentahar et al., 2010;", "ref_id": "BIBREF0" }, { "start": 506, "end": 532, "text": "Chakrabarty et al., 2019b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we extend the work of Chakrabarty et al. (2019b) in AM for persuasive online discussions. Particularly, we take advantage of a multi-task learning (MTL) approach to automatically identify the argument structures in persuasive dialogues that contain both micro-level and macrolevel argumentation. 
We identify argument components (claim, premise, non-argumentative) and two types of relations: intra-turn relations within one post and inter-turn relations across posts. Our results demonstrate that using MTL improves the performance of both argument component and intra-turn/inter-turn relation classification. However, further analysis shows that the tasks in the MTL configuration should be chosen carefully depending on the focused task. We then try several techniques to increase the innate low precision of the relation classification tasks due to the highly imbalanced data, specifically SMOTE (Chawla et al., 2002) and XGBoost (Chen and Guestrin, 2016) . Our results demonstrate that SMOTE is not very helpful but XGBoost, when used with the representations learnt from MTL, can increase the precision and F-scores of the relation identification tasks.", "cite_spans": [ { "start": 37, "end": 63, "text": "Chakrabarty et al. (2019b)", "ref_id": "BIBREF3" }, { "start": 343, "end": 378, "text": "(claim, premise, non-argumentative)", "ref_id": null }, { "start": 908, "end": 935, "text": "SMOTE (Chawla et al., 2002)", "ref_id": null }, { "start": 948, "end": 973, "text": "(Chen and Guestrin, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is closely related to Chakrabarty et al. (2019b) . Their system, called AMPERSAND, tackles three AM tasks on a dataset created from the Change My View (CMV) subreddit 1 (Hidey et al., 2017) and focuses on transfer learning approaches with BERT (Devlin et al., 2019) that take advantage of discourse and dialogue context. Specifically, they define three separate tasks: argument component classification and intra/inter relation identification. For the first task, the requirement is to classify a given sentence into either Claim, Premise or Non-argumentative. 
For the intra-relation identification task, given a pair of argumentative sentences from the same post, we need to answer whether an argumentative relation exists between these two sentences. The inter-relation identification task is similar, except that the two sentences are from different posts. However, they treated the tasks of argument component classification and relation prediction separately and had independent BERT models for the tasks. Our approach works on the assumption that the three tasks are related to each other.", "cite_spans": [ { "start": 31, "end": 57, "text": "Chakrabarty et al. (2019b)", "ref_id": "BIBREF3" }, { "start": 178, "end": 198, "text": "(Hidey et al., 2017)", "ref_id": "BIBREF8" }, { "start": 253, "end": 274, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Many studies have shown that jointly learning several tasks during training usually leads to better performance in NLP problems (S\u00f8gaard and Goldberg, 2016; Yang et al., 2016; Liu et al., 2019; Peng et al., 2020) . Focusing on one single domain and dataset, Eger et al. (2017) treats AM as a sequence tagging problem and uses sub-tasks such as component identification and relation classification as auxiliaries in MTL to improve performances. Schulz et al. (2018) also formalizes argument component identification as a sequence tagging problem but utilizes multiple datasets from different domains in their MTL setup. They observe that the results on a small AM dataset can be improved when other AM datasets are leveraged as auxiliary tasks. These approaches, however, work on monologues where each data instance is from one person and therefore ignore the macro-structure of arguments. Our work tackles AM at the dialogical level, specifically on online discussion forums. 
We hypothesize that MTL can help represent both micro and macro structure and use BERT with an MTL setup to classify argument components and relations at the same time.", "cite_spans": [ { "start": 128, "end": 156, "text": "(S\u00f8gaard and Goldberg, 2016;", "ref_id": "BIBREF14" }, { "start": 157, "end": 175, "text": "Yang et al., 2016;", "ref_id": "BIBREF17" }, { "start": 176, "end": 193, "text": "Liu et al., 2019;", "ref_id": "BIBREF9" }, { "start": 194, "end": 212, "text": "Peng et al., 2020)", "ref_id": "BIBREF11" }, { "start": 258, "end": 276, "text": "Eger et al. (2017)", "ref_id": "BIBREF7" }, { "start": 444, "end": 464, "text": "Schulz et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use the same data from Chakrabarty et al. (2019b) . They reuse the CMV corpus (Hidey et al., 2017) , where each sentence in a thread of the CMV subreddit is annotated as claim, premise or non-argumentative. Additionally, they annotate the argument relation among these propositions (inter-turn/intra-turn) and expand the corpus by annotating additional argument components using the same guidelines.", "cite_spans": [ { "start": 26, "end": 52, "text": "Chakrabarty et al. (2019b)", "ref_id": "BIBREF3" }, { "start": 81, "end": 101, "text": "(Hidey et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The final dataset consists of 112 threads with 2756 sentences. The proportions of claims, premises and non-argumentative components are 34%, 43% and 23% respectively. Although several types of relations are annotated, the relation identification task only uses a binary label to represent whether a relation exists between two components. 
The dataset is highly imbalanced in terms of relations, with only 4.6% of 27254 possible pairs having intra-turn and only 3.2% of 26695 having inter-turn relations, making low precision a major modeling challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Below is an example of a discussion. User A makes a claim and supports it with a premise (intra-turn relation). User B, however, disagrees with the reasoning made by user A (inter-turn relation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "A: [I think the biggest threat to global stability comes from the political fringes.] 0:CLAIM [It has been like that in the past.] 1:PREMISE:SUPPORT:0 B: [What happened in the past has nothing to do with the present] 2:ATTACK:1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Realizing that the data size is small, Chakrabarty et al. (2019b) utilizes distant-labeled data and uses transfer learning for fine-tuning BERT depending on the context. The IMHO+context dataset (Chakrabarty et al., 2019a ) is used as micro-level context data. This is a corpus of opinionated claims in the form of sentences containing the internet acronyms IMO (in my opinion) or IMHO (in my humble opinion) from Reddit. The assumption is that a relation exists between a sentence containing IMHO and the following one. For macro-level context data, they use the Reddit quote feature and construct the QR dataset containing quote-response pairs. In Reddit, when responding to a post, a user can quote another user's response and this feature is used to highlight what part of someone's argument a particular user is targeting in the CMV corpus. Specifically, the QR dataset treats the quoted text and the following sentence as a positive inter-turn relation example. 
For a fair comparison, we also fine-tune BERT using both distant datasets.", "cite_spans": [ { "start": 195, "end": 221, "text": "(Chakrabarty et al., 2019a", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We use AMPERSAND's (Chakrabarty et al., 2019b) two relation classification constraints. For intra-turn relations, the source has to be a premise and the target can be a premise or a claim. For inter-turn relations, the source must be a claim.", "cite_spans": [ { "start": 19, "end": 46, "text": "(Chakrabarty et al., 2019b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We follow the architecture of Liu et al. (2019) for MTL. It shares the lower BERT encoder layers across all tasks, with task-specific classification layers on top of them. In this procedure, each task can be either single-sentence classification or sentence pair classification, which fits our tasks of component classification and relation identification. The latter task can be further divided into intra-turn and inter-turn relations, resulting in three tasks in total.", "cite_spans": [ { "start": 30, "end": 47, "text": "Liu et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-task Learning on BERT", "sec_num": "4.1" }, { "text": "We have three MTL configurations, each representing a different combination of tasks incorporated in the MTL process. First, all three tasks are used for MTL (MTL_ALL). Second, only the argument component and intra-turn relation tasks are used (MTL_intra). 
Third, in MTL_inter, the intra-turn relation classification task is excluded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task Learning on BERT", "sec_num": "4.1" }, { "text": "Our reason is that intra-turn and inter-turn relations can be different in nature and including inter-turn prediction could possibly degrade intra-turn prediction, or vice versa. The argument component classification task is essential for both relation identification tasks since it helps filter out pairs of sentences that do not follow the constraints. Thus, it is kept in all MTL configurations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-task Learning on BERT", "sec_num": "4.1" }, { "text": "Due to imbalanced data with less than 5% of pairs having relations, low precision is expected. AMPERSAND (Chakrabarty et al., 2019b) attempts to increase intra-turn relation precision with window clipping. Specifically, the best F-scores are reported when limiting the prediction of an intra-turn relation to be within a window of 1. Since this approach only works for intra-turn relations and is dependent on the data, we instead try two universal approaches which are corpus-independent to raise model precision. SMOTE (Chawla et al., 2002) is an oversampling technique where synthetic samples are generated for the minority class. It focuses on the feature space to generate new instances by using interpolation between positive instances that lie together. Gradient boosting is also useful when data is highly skewed (Brown and Mues, 2012; Teramoto, 2009) . We experiment using XGBoost (Chen and Guestrin, 2016) , a decision-tree-based boosting algorithm, as the classifier on top of the BERT representation instead of the normal softmax layer.", "cite_spans": [ { "start": 105, "end": 132, "text": "(Chakrabarty et al., 2019b)", "ref_id": "BIBREF3" }, { "start": 515, "end": 542, "text": "SMOTE (Chawla et al., 2002)", "ref_id": null }, { "start": 821, "end": 843, "text": "(Brown and Mues, 2012;", "ref_id": "BIBREF1" }, { "start": 844, "end": 859, "text": "Teramoto, 2009)", "ref_id": "BIBREF16" }, { "start": 890, "end": 915, "text": "(Chen and Guestrin, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Low Precision in Relation Prediction", "sec_num": "4.2" }, { "text": "In the MTL_ALL setup, we first fine-tune the BERT model on the IMHO+context and QR datasets using both the masked language modeling and next sentence prediction objectives. We then fine-tune BERT using MTL by learning the three tasks jointly. For the MTL_intra configurations, only the IMHO+context data is used for the first fine-tuning step and only the argument component classification and intra-turn identification tasks are used in the MTL procedures. The same settings are applied for MTL_inter, but the QR data is used for the first fine-tuning step instead. Peng et al. (2020) observe that additional fine-tuning after the training process can increase performance. They remove the last layer of the trained model, which is basically a linear and a softmax layer on top of the BERT representation that makes the final classification, and replace it with a new untrained one. Then they use a smaller learning rate to continue training all layers on each specific task. We call this step refinement.", "cite_spans": [ { "start": 567, "end": 585, "text": "Peng et al. 
(2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "AMPERSAND uses an additional RST classifier and ensembles its result with the prediction from the BERT classifier to predict the existence of a relation. Rhetorical Structure Theory (RST) provides an explanation for the coherence of text, in the form of a tree where leaves represent elementary discourse units and other nodes represent discourse relations. Specifically, they create a RST parse tree for the concatenated two argumentative components and take the predicted discourse relation at the root of the parse tree as a categorical feature in a binary classifier. They also use a candidate target selection procedure built from extractive summarization for inter-turn relation identification. Since these two are not involved in training and only work as additional filters, we keep them unchanged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "For XGBoost, since the 768-dimension vector from BERT is too large, we reduce the dimension to 128 using a two-layer neural network (512 and 128 neurons, respectively). We did an experiment and see that this reduction only affects performance with XGBoost, so we keep the 128 dimensions for all models to make the comparisons fair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.3" }, { "text": "Using the same train/test split from Ampersand (10% of the data for testing) (Chakrabarty et al., 2019b), we compare our results with AMPER-SAND. In Tables 1, 2 and 3, XG stands for XG-Boost, SMO for SMOTE, and refine for the refinement step from Sec. 4.3. 
Best results are in bold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "To make the comparisons more consistent with our models, we applied refinement and XGBoost on top of the final BERT representation of AMPERSAND. The reported numbers from the second row of the tables are from our own rerun of Ampersand and therefore they are slightly different from those in the original paper. Although we used the AMPERSAND published code, the difference in PyTorch version could be the cause for this discrepancy. The best results of AMPERSAND from the original paper (Chakrabarty et al., 2019b) are reported in the first rows of the tables as Ampersand*. for Claim, Premise and Non-argumentative respectively. When the inter-turn relation identification task is removed from the MTL configuration, the MTL_intra model shows a slight drop in Premise (0.4%) and NA (1.3%) but a small increase in Claim (0.5%). On the other hand, taking out the intra-turn relation identification task (MTL_inter) degrades the F-scores in all categories. This implies that in our setting, intra-turn relation identification plays a crucial role in classifying components. Furthermore, including only inter-turn relation identification can hurt component classification as MTL_inter is inferior to AMPERSAND.", "cite_spans": [ { "start": 488, "end": 515, "text": "(Chakrabarty et al., 2019b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The additional fine-tuning step on each separate task also helps boost the F-scores. We witness slight increases in all three classes for all of the MTL configurations and the Ampersand model with this refinement step. 
Our best results are obtained with the MTL_ALL model with refinement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argumentative Component Classification", "sec_num": "5.1" }, { "text": "For each metric in Tables 2 and 3, we report results with both gold-standard (G) and predicted (P) components from the argument component classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Prediction", "sec_num": "5.2" }, { "text": "The results from Table 2 demonstrate that MTL is still helpful in this task. Both MTL_ALL and MTL_intra have higher F-scores in comparison with the equivalent version of AMPERSAND.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Intra-turn Relations", "sec_num": "5.2.1" }, { "text": "MTL_intra models outperform the equivalent MTL_ALL models in terms of precision and F1 scores. This suggests that we should eliminate the inter-turn relation task from MTL when the intra-turn relation task is the focus. Our reasoning is that inter-turn relations have some special characteristics and are harder to identify, which degrades intra-turn performance when the inter-turn task is included in the MTL configuration. The refinement step generally helps improve the performance, but the gain is modest, especially in the case of MTL_ALL, where the increases in F-score are less than 1 point for both gold and predicted components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-turn Relations", "sec_num": "5.2.1" }, { "text": "The XGBoost classifier raises the already low precision scores noticeably for MTL. For both MTL configurations, precision scores are increased by at least 2.7 points while recall scores are not decreased by more than 0.4 points. This leads to an improvement in F-scores based on predicted components of 4.5 points for MTL_ALL and 4.8 points for MTL_intra. 
Our best results are obtained by using XGBoost on the features of MTL_intra. In contrast, SMOTE does not help much.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intra-turn Relations", "sec_num": "5.2.1" }, { "text": "For this task, the results from Table 3 demonstrate that MTL models still generally outperform the comparable baselines, but the gap is marginal compared to the previous two tasks.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Inter-turn Relations", "sec_num": "5.2.2" }, { "text": "In contrast to intra-turn relation prediction, removing the intra-turn task does not always improve the result of the inter-turn task. In other words, MTL_inter models do not always outperform the equivalent MTL_ALL model. Also, the gain with XGBoost is now smaller, with less than 2 points in F-scores for both MTL configurations, regardless of gold-standard or predicted components. The reason is a now much larger recall drop (e.g., 9.3% and 6.8% for MTL_ALL and MTL_inter respectively, with predicted components).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-turn Relations", "sec_num": "5.2.2" }, { "text": "For predicted components, MTL_ALL with XGBoost achieves the best F-score, while for gold components, MTL_inter with XGBoost is best.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-turn Relations", "sec_num": "5.2.2" }, { "text": "6 Qualitative Analysis of Intra-turn Degradation using MTL_ALL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inter-turn Relations", "sec_num": "5.2.2" }, { "text": "One noticeable observation from Section 5.2.1 and Table 2 is that the incorporation of the inter-turn prediction task into the MTL process indeed hurts intra-turn performance. 
To further analyze this phenomenon, we retrieve examples that were predicted correctly by MTL_intra but incorrectly by MTL_ALL. In many of these examples, there is a wrong \"inference\" that if A has an inter-turn relation with C and B has an inter-turn relation with C, then A has an intra-turn relation with B. Below is a concrete example of this error. C0 and C1 are two argumentative components from post P1, while C2 and C3 are two consecutive argumentative components from another post P2 replying to P1. The MTL_ALL model predicts there exists an intra-turn relation between C2 and C3, which is incorrect. In this example, C0 presents a claim that \"There have been many dark animated movies that become famous\" and premise C1 supports this claim with two examples of \"The Iron Giant\" and \"Land Before Time\". Although both C2 and C3 challenge the connection from one of the two mentioned movies to the claim of C0, there should not be an intra-turn relation between them. C2 and C3 may both support a claim attacking C0, but they do not support or attack each other. P1: [There have been a great many \"dark\" animated movies and shows that grew to become extremely famous.] C0:CLAIM [If we're using a level of \"dark\" of the level of Brave Little Toaster then why did things like The Iron Giant and Land Before Time get a ton of love associated with them.] C1:PREMISE:SUPPORT:0 P2: [Land before Time has about 15 other movies in the franchise which make it popular, much like toy story,] C2:PREMISE:ATTACK:C1 and [Iron Giant doesn't really deal with much that's terribly dark or controversial.] C3:PREMISE:ATTACK:C1 This type of error raises the number of false positive cases in intra-turn relation identification. 
As a result, the precision scores of MTL_ALL are inferior to those of MTL_intra.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Inter-turn Relations", "sec_num": "5.2.2" }, { "text": "We show that using multi-task learning with micro and macro structures represented improves the performance of argumentative component classification and two relation prediction tasks, both with and without refinement. Also, we observe that combining all tasks may not always be beneficial since we can have conflicts between some of them. Further, our results demonstrate that using the XGBoost model as the final classifier on top of the representation from BERT, while not affecting the recall much, raises the precision scores for the intra-turn and inter-turn relation tasks. In sum, we achieve better results with MTL compared to the singletask training of Chakrabarty et al. 
(2019b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://www.reddit.com/r/changemyview", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Ahmed Magooda, Mingzhi Yu, Muhammad Salem and Ravneet Singh for their constructive feedback on the initial draft of the paper and the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A taxonomy of argumentation models used for knowledge representation", "authors": [ { "first": "Jamal", "middle": [], "last": "Bentahar", "suffix": "" }, { "first": "Bernard", "middle": [], "last": "Moulin", "suffix": "" }, { "first": "Micheline", "middle": [], "last": "B\u00e9langer", "suffix": "" } ], "year": 2010, "venue": "Artificial Intelligence Review", "volume": "33", "issue": "3", "pages": "211--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jamal Bentahar, Bernard Moulin, and Micheline B\u00e9langer. 2010. A taxonomy of argumentation mod- els used for knowledge representation. Artificial In- telligence Review, 33(3):211-259.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An experimental comparison of classification algorithms for imbalanced credit scoring data sets", "authors": [ { "first": "Iain", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Mues", "suffix": "" } ], "year": 2012, "venue": "Expert Systems with Applications", "volume": "39", "issue": "3", "pages": "3446--3453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iain Brown and Christophe Mues. 2012. An experi- mental comparison of classification algorithms for imbalanced credit scoring data sets. 
Expert Systems with Applications, 39(3):3446-3453.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "IMHO fine-tuning improves claim detection", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "558--563", "other_ids": { "DOI": [ "10.18653/v1/N19-1054" ] }, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, and Kathy McKeown. 2019a. IMHO fine-tuning improves claim detection. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 558-563, Minneapolis, Minnesota. As- sociation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "AMPERSAND: Argument mining for PER-SuAsive oNline discussions", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2933--2943", "other_ids": { "DOI": [ "10.18653/v1/D19-1291" ] }, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 
2019b. AMPERSAND: Argument mining for PER- SuAsive oNline discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2933-2943, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "SMOTE: synthetic minority over-sampling technique", "authors": [ { "first": "V", "middle": [], "last": "Nitesh", "suffix": "" }, { "first": "Kevin", "middle": [ "W" ], "last": "Chawla", "suffix": "" }, { "first": "Lawrence", "middle": [ "O" ], "last": "Bowyer", "suffix": "" }, { "first": "W Philip", "middle": [], "last": "Hall", "suffix": "" }, { "first": "", "middle": [], "last": "Kegelmeyer", "suffix": "" } ], "year": 2002, "venue": "Journal of artificial intelligence research", "volume": "16", "issue": "", "pages": "321--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. SMOTE: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research, 16:321-357.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Xgboost: A scalable tree boosting system", "authors": [ { "first": "Tianqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16", "volume": "", "issue": "", "pages": "785--794", "other_ids": { "DOI": [ "10.1145/2939672.2939785" ] }, "num": null, "urls": [], "raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 785-794, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural end-to-end learning for computational argumentation mining", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "11--22", "other_ids": { "DOI": [ "10.18653/v1/P17-1002" ] }, "num": null, "urls": [], "raw_text": "Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11-22, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Analyzing the semantic types of claims and premises in an online persuasive forum", "authors": [ { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Musi", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "11--21", "other_ids": { "DOI": [ "10.18653/v1/W17-5102" ] }, "num": null, "urls": [], "raw_text": "Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11-21, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": { "DOI": [ "10.18653/v1/P19-1441" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019.
Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Joint prediction in MST-style discourse parsing for argumentation mining", "authors": [ { "first": "Andreas", "middle": [], "last": "Peldszus", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "938--948", "other_ids": { "DOI": [ "10.18653/v1/D15-1110" ] }, "num": null, "urls": [], "raw_text": "Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 938-948, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An empirical study of multi-task learning on BERT for biomedical text mining", "authors": [ { "first": "Yifan", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Qingyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhiyong", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", "volume": "", "issue": "", "pages": "205--214", "other_ids": { "DOI": [ "10.18653/v1/2020.bionlp-1.22" ] }, "num": null, "urls": [], "raw_text": "Yifan Peng, Qingyu Chen, and Zhiyong Lu. 2020. An empirical study of multi-task learning on BERT for biomedical text mining. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 205-214, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "End-to-end argumentation mining in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1384--1394", "other_ids": { "DOI": [ "10.18653/v1/N16-1164" ] }, "num": null, "urls": [], "raw_text": "Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1384-1394, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multi-task learning for argumentation mining in low-resource settings", "authors": [ { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Kahse", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "35--41", "other_ids": { "DOI": [ "10.18653/v1/N18-2006" ] }, "num": null, "urls": [], "raw_text": "Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-task learning for argumentation mining in low-resource settings.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 35-41, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep multi-task learning with low level tasks supervised at lower layers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "231--235", "other_ids": { "DOI": [ "10.18653/v1/P16-2038" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231-235, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parsing argumentation structures in persuasive essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "3", "pages": "619--659", "other_ids": { "DOI": [ "10.1162/COLI_a_00295" ] }, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Balanced gradient boosting from imbalanced data for clinical outcome prediction.
Statistical applications in genetics and molecular biology", "authors": [ { "first": "Reiji", "middle": [], "last": "Teramoto", "suffix": "" } ], "year": 2009, "venue": "", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reiji Teramoto. 2009. Balanced gradient boosting from imbalanced data for clinical outcome prediction. Statistical applications in genetics and molecular biology, 8(1).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multi-task cross-lingual sequence tagging from scratch", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. CoRR, abs/1603.06270.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "content": "
shows that compared to Ampersand, in two MTL configurations MTL_ALL and MTL_intra,
", "text": "", "html": null, "type_str": "table" }, "TABREF2": { "num": null, "content": "
Method     | Precision G P | Recall G P | F-score G P
Ampersand* | 18.9 17.5 | 79.4 75.6 | 30.5 28.3
Ampersand  | 18.7 17.1 | 79.4 75.1 | 30.3 27.9
+ refine   | 19.3 18.1 | 77.8 75.1 | 30.9 29.2
/w XG      | 17.0 16.5 | 73.1 68.9 | 27.6 26.6
MTL_ALL    | 20.3 18.2 | 79.1 74.5 | 32.3 29.3
+ refine   | 20.3 18.3 | 79.1 74.5 | 32.5 29.4
/w SMO     | 20.5 18.0 | 78.8 74.9 | 32.5 29.0
/w XG      | 21.2 19.5 | 75.7 65.2 | 33.1 29.7
MTL_inter  | 20.1 17.9 | 79.1 74.0 | 32.1 28.8
+ refine   | 20.2 18.3 | 79.0 74.2 | 32.2 29.4
/w SMO     | 20.0 18.2 | 79.4 74.9 | 32.0 29.3
/w XG      | 21.5 18.8 | 77.5 67.4 | 33.7 29.4
", "text": "Results for Intra-turn Relation Prediction", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "content": "", "text": "Results for Inter-turn Relation Prediction", "html": null, "type_str": "table" } } } }