{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:06.990349Z"
},
"title": "Self-trained Pretrained Language Models for Evidence Detection",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Elaraby",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": "dlitman@pitt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Argument role labeling is a fundamental task in Argument Mining research. However, such research often suffers from a lack of largescale datasets labeled for argument roles such as evidence, which is crucial for neural model training. While large pretrained language models have somewhat alleviated the need for massive manually labeled datasets, how much these models can further benefit from self-training techniques hasn't been widely explored in the literature in general and in Argument Mining specifically. In this work, we focus on self-trained language models (particularly BERT) for evidence detection. We provide a thorough investigation on how to utilize pseudo labels effectively in the selftraining scheme. We also assess whether adding pseudo labels from an out-of-domain source can be beneficial. Experiments on sentence level evidence detection show that selftraining can complement pretrained language models to provide performance improvements.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Argument role labeling is a fundamental task in Argument Mining research. However, such research often suffers from a lack of largescale datasets labeled for argument roles such as evidence, which is crucial for neural model training. While large pretrained language models have somewhat alleviated the need for massive manually labeled datasets, how much these models can further benefit from self-training techniques hasn't been widely explored in the literature in general and in Argument Mining specifically. In this work, we focus on self-trained language models (particularly BERT) for evidence detection. We provide a thorough investigation on how to utilize pseudo labels effectively in the selftraining scheme. We also assess whether adding pseudo labels from an out-of-domain source can be beneficial. Experiments on sentence level evidence detection show that selftraining can complement pretrained language models to provide performance improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the area of Argument Mining, obtaining highquality manually labeled data is often a complicated and expensive task (Habernal et al., 2018) . Therefore, utilizing unlabeled data can help to achieve further improvements over standard supervised models. Recently, pretrained language models have achieved significant improvements in a wide variety of downstream NLP tasks (Kenton and Toutanova, 2019; Liu et al., 2019; Yang et al., 2019) . These models utilize large unlabeled datasets to learn meaningful representations that are transferable across several tasks. In this work, we extend the utilization of unlabelled data in Argument Mining by proposing to use pretrained language models in a self-training manner. We focus on evidence detection which is an essential component in building natural language systems that are capable of arguing and debating.",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "(Habernal et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 372,
"end": 400,
"text": "(Kenton and Toutanova, 2019;",
"ref_id": null
},
{
"start": 401,
"end": 418,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 419,
"end": 437,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Self-training is a semi-supervised technique that employs unlabeled data by using a teacher model, trained on labeled data, to generate pseudo labels out of unlabeled examples (Yarowsky, 1995; Scudder, 1965) . The pseudo labeled data are then blended with manually labeled data to train a student model of a similar or larger size of the teacher model. Another way to use pseudo labels is to blend them with labeled data and train a smaller student model. This method is referred to as knowledge distillation (Hinton et al., 2015) .",
"cite_spans": [
{
"start": 176,
"end": 192,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF30"
},
{
"start": 193,
"end": 207,
"text": "Scudder, 1965)",
"ref_id": "BIBREF21"
},
{
"start": 509,
"end": 530,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
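To make the teacher-student scheme above concrete, here is a minimal self-training sketch in plain Python. The helper names (train_fn, predict_proba) and the blending strategy are illustrative assumptions, not the authors' released code.

```python
# Minimal self-training sketch (hypothetical helpers, not the authors' code).
# A teacher is trained on gold labels, pseudo-labels the unlabeled pool, and the
# most confident pseudo-labeled examples are blended with gold data for the student.

def self_train_round(train_fn, labeled, unlabeled, top_fraction=0.1):
    teacher = train_fn(labeled)                       # 1) teacher on gold data
    probs = teacher.predict_proba(unlabeled)          # 2) class distributions for the pool
    pseudo = [
        (x, int(p.argmax()), float(p.max()))          # (example, pseudo label, confidence)
        for x, p in zip(unlabeled, probs)
    ]
    pseudo.sort(key=lambda t: t[2], reverse=True)     # 3) rank by confidence
    keep = pseudo[: int(len(pseudo) * top_fraction)]  # keep only the top fraction
    blended = labeled + [(x, y) for x, y, _ in keep]  # 4) blend with gold data
    student = train_fn(blended)                       # 5) student of equal or larger size
    return student
```

Knowledge distillation follows the same loop; the only difference is that the student model is smaller than the teacher.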
{
"text": "In this paper, we seek to answer the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q1. Under which conditions can self-training improve finetuned pretrained language models? To answer this question, we experiment with three different techniques to utilize the automatically generated pseudo labels to assess whether large pretrained models can benefit from self-training or not. (a) Bootstrapped self-training: where we select automatically annotated instances of high confidence and add them to the training data during finetuning. (b) Pretrain on pseudo labels: where we use the selected samples in pretraining the model before finetuning on the manually labeled set. (c) Masked language model pretraining: where we use the selected samples to pretrain the model using a masked language model objective before finetuning the model on the manually labeled set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q2. Is there a constraint on the domain of unlabeled data? In our experiments, we employ both in-domain and out-of-domain unlabeled corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q3. Does increasing the similarity between labeled and unlabeled data improve self-training?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide a retrieval step to filter unlabeled data by considering the most similar N samples to each example in the training data . Our aim is to increase the similarity between labeled and unlabeled data by filtration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows. (a) We propose a thorough investigation of self-training meth-ods for evidence detection over large pretrained language models. (b) We empirically show that with proper adjustments, self-training can indeed achieve improvements over a pretrained baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Argument Mining spans various lines of research work. Stab and Gurevych (2014) , Habernal and Gurevych (2017) , and Persing and Ng (2015) focus on identifying and classifying argument roles in text. Another direction is to mine argument units that are relevant to specific claims or topics (Shnarch et al., 2018; Biran and Rambow, 2011; Levy et al., 2017) . Our work directly extends the work in this area by Shnarch et al. (2018) . Evidence detection, as viewed in this work, aims at classifying relevant sentences to a certain topic. Shnarch et al. (2018) benefits from large-scale weakly labeled data described in Levy et al. (2017) blended with manually annotated data to train a BiLSTM with GLoVe embeddings and integrate the topic with an attention mechanism. Our work also aims at making use of weakly labeled data. However, instead of using the retrieved data described in Levy et al. 2017directly, we employ a teacher model trained on the manually annotated data to generate the pseudo labels for the unlabeled set.",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "Stab and Gurevych (2014)",
"ref_id": "BIBREF24"
},
{
"start": 81,
"end": 109,
"text": "Habernal and Gurevych (2017)",
"ref_id": "BIBREF6"
},
{
"start": 116,
"end": 137,
"text": "Persing and Ng (2015)",
"ref_id": "BIBREF16"
},
{
"start": 290,
"end": 312,
"text": "(Shnarch et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 313,
"end": 336,
"text": "Biran and Rambow, 2011;",
"ref_id": "BIBREF0"
},
{
"start": 337,
"end": 355,
"text": "Levy et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 409,
"end": 430,
"text": "Shnarch et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 536,
"end": 557,
"text": "Shnarch et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 617,
"end": 635,
"text": "Levy et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work falls under the umbrella of semisupervised learning. In natural language processing, the recent use of unlabeled data has focused on pretraining large language models (Kenton and Toutanova, 2019; Howard and Ruder, 2018; Radford et al., 2018), which has led to remarkable improvements across a wide variety of NLP downstream tasks. In Argument Mining, Chakrabarty et al. (2019) retrieved sentences with seeds as In My Opinion (IMO) and In My Humble Opinion (IMHO) and used them to finetune a language model before finetuning on a claims dataset. Reimers et al. (2019) used contextualized embeddings (ELMo and BERT) to classify and cluster argument components. In their work, they reported an 80% accuracy when finetuning BERT over the evidence data described in Shnarch et al. (2018) and 81% when concatenating the topic with the input evidence. Following the same line of work, we first finetune BERT over the same evidence dataset and use it as our baseline model. We then extend finetuned BERT by utilizing it as a teacher in a self-training manner, which hasn't been explored in the literature before.",
"cite_spans": [
{
"start": 554,
"end": 575,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 770,
"end": 791,
"text": "Shnarch et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Self-training has proven to be beneficial in a wide variety of tasks (Yalniz et al., 2019; Xie et al., 2020; Zoph et al., 2020; Kahn et al., 2020; Pino et al., 2020; Wang et al., 2021) . In natural language processing, Ruder and Plank (2018) evaluated several semi-supervised baselines on sentiment analysis and part of speech tagging. They built their experiment over a BiLSTM baseline. While their work established solid baselines for semi-supervised learning in neural models, their methods focused on recurrent neural models. On the other hand, our experiments study self-training in the context of large pretrained language models.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Yalniz et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 91,
"end": 108,
"text": "Xie et al., 2020;",
"ref_id": "BIBREF27"
},
{
"start": 109,
"end": 127,
"text": "Zoph et al., 2020;",
"ref_id": null
},
{
"start": 128,
"end": 146,
"text": "Kahn et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 147,
"end": 165,
"text": "Pino et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 166,
"end": 184,
"text": "Wang et al., 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Self-trained pretrained language models haven't been studied extensively in the literature. Du et al. (2021) studied self-training as another way of leveraging unlabeled data on top of large pretrained language models. They achieved 2.6% improvements in standard classification tasks. Khalifa et al. (2021) used simple bootstrapped self-training over BERT to improve zero-shot and few-shot classification of Arabic dialects. We instead use self-training on top of pretrained language models for sentence-level evidence classification. Our main incentive is to explore the utility of self-training techniques in argument mining where acquiring manually labeled data is usually hard to get in large quantities.",
"cite_spans": [
{
"start": 285,
"end": 306,
"text": "Khalifa et al. (2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We make use of the IBM debater evidence dataset 1 created by Shnarch et al. (2018) , which is composed of 118 topics chosen from various debate portals. The dataset consists of 5785 topic-dependent sentences in total split into two sets: 4066 instances for training and 1719 instances for testing. For each topic, Shnarch et al. (2018) retrieved sentences from Wikipedia which are then manually annotated by 10 workers per topic. The crowd-annotators either select whether a sentence is evidence or nonevidence for a given topic. Table 1 shows examples from the evidence and non-evidence sentences in the IBM debater evidence dataset.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "Shnarch et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 314,
"end": 335,
"text": "Shnarch et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Manually labeled dataset",
"sec_num": "3.1"
},
{
"text": "In our experiments, we rely on two sets of unlabeled (for evidence) data. The first one is an in-domain argumentative corpus from Wikipedia. The second one is a slightly out of domain (not sentences Topic: \"We should limit executive compensation\" evidence A February 2009 report, published by the Institute for Policy Studies notes the impact excessive executive compensation has on taxpayers: U.S. taxpayers subsidize excessive executive compensation -by more than $20 billion per year -via a variety of tax and accounting loopholes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unlabeled datasets",
"sec_num": "3.2"
},
{
"text": "A say on pay -a non-binding vote of the general meeting to approve director pay packages, is practised in a growing number of countries. Wikipedia unlabeled data. Following the method described in Levy et al. (2017) , we use the data retrieved by querying Wikipedia with a query composed of \"that\" + topic_concept. The work done by (Shnarch et al., 2018) suggests that the previous query can yield argumentative sentences in general, not just claims . The resultant corpus is composed of 29k candidate sentences. Table 2 shows an example unlabeled retrieved sentence out of Wikipedia. Webis-Debate-16 dataset. In order to assess whether out of domain unlabeled data can improve performance, we experiment with the Webis-Debate-16 corpus 2 (created from debates extracted from idebate.org) as an unlabeled argumentative source for evidence detection. The dataset is labeled on the sentence level with argumentative versus non-argumentative labels. The dataset contains 10846 argumentative phrases and 5556 nonargumentative phrases, therefore, we utilize it as an unlabeled (for evidence) argumentative source. Table 3 shows examples of non-argumentative and argumentative sentences from the Webis-Debate-16 corpus.",
"cite_spans": [
{
"start": 197,
"end": 215,
"text": "Levy et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 332,
"end": 354,
"text": "(Shnarch et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1109,
"end": 1116,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "non-evidence",
"sec_num": null
},
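As a small illustration of the retrieval pattern described above, the query for each debate topic is simply the word "that" concatenated with the topic concept; the topic list and the downstream Wikipedia sentence index below are hypothetical placeholders, not the authors' pipeline.

```python
# Sketch of the "that" + topic_concept query pattern (Levy et al., 2017).
# The topic concepts and the Wikipedia sentence index are assumptions for illustration.
topic_concepts = ["executive compensation"]            # illustrative topic from Table 1
queries = [f'"that" {concept}' for concept in topic_concepts]
print(queries)  # e.g. ['"that" executive compensation'] is sent to a Wikipedia index,
                # and the returned candidate sentences form the ~29k unlabeled pool
```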
{
"text": "2 https://webis.de/data/webis-debate-16.html sentences Debate Topic: \"Economy\" nonargumentative the price tag was set as being \u00a332.7billion. argumentative \"high speed two will help to solve this inequality by increasing connections between north and south .\" ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "non-evidence",
"sec_num": null
},
{
"text": "Our primary goal is to investigate different methods of self-training on top of large pretrained language models for evidence extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We start with finetune-BERT as our baseline, which achieved the best results on IBM Debater evidence dataset in Reimers et al. (2019) . In our experiments, we refer to this baseline as evidenceBERT.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finetune-BERT baseline",
"sec_num": "4.1"
},
{
"text": "Our first self-training setting is bootstrapped selftraining, where we employ evidenceBERT to annotate unlabeled data. Every epoch, we make predictions over unlabeled data U . For each instance x in U , we extract the probability assigned to the most likely class p(x) = argmaxM (x) where x U and M is our evidenceBERT. The examples are then ranked based on the probabilities and the top N examples selected. In our experiments, we determine N by choosing the percentile of the top examples. Due to limited computational resources, we vary the percentiles from 10% to maximum 50%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapped self-training",
"sec_num": "4.2"
},
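A minimal sketch of this confidence-based selection step with the Hugging Face transformers API; the checkpoint path "evidence-bert", the per-sentence batching, and the maximum length are illustrative assumptions rather than the authors' implementation.

```python
# Rank unlabeled sentences by the teacher's top-class probability and keep the
# top percentile. Checkpoint path and batching details are illustrative.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
teacher = BertForSequenceClassification.from_pretrained("evidence-bert")  # hypothetical path
teacher.eval()

def select_top_percentile(sentences, percentile=0.1):
    scored = []
    with torch.no_grad():
        for sent in sentences:
            enc = tokenizer(sent, return_tensors="pt", truncation=True, max_length=128)
            probs = torch.softmax(teacher(**enc).logits, dim=-1).squeeze(0)
            scored.append((sent, int(probs.argmax()), float(probs.max())))
    scored.sort(key=lambda t: t[2], reverse=True)      # most confident first
    return scored[: int(len(scored) * percentile)]     # top N as a percentile of the pool
```

In the experiments the percentile is varied between 10% and 50%, which corresponds to percentile values of 0.1 to 0.5 in this sketch.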
{
"text": "Weakly labeled data has been used in pre-training neural models in information retrieval (Dehghani et al., 2017) and sentiment analysis (Severyn and Moschitti, 2015) . We employ the generated pseudo labels from evidenceBERT to initially finetune BERT before finetuning over the manually labeled set.",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "(Dehghani et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 136,
"end": 165,
"text": "(Severyn and Moschitti, 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrain on pseudo labels",
"sec_num": "4.3"
},
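A sketch of this two-stage scheme; finetune() is a hypothetical helper wrapping a standard supervised training loop and is not part of the paper's code.

```python
# Pretrain on pseudo labels first, then finetune on the manually labeled set.
# `finetune` is a hypothetical supervised-training helper.
from transformers import BertForSequenceClassification

def pretrain_then_finetune(pseudo_labeled, gold_labeled, finetune):
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    model = finetune(model, pseudo_labeled)   # stage 1: pseudo labels from evidenceBERT
    model = finetune(model, gold_labeled)     # stage 2: manually labeled evidence data
    return model
```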
{
"text": "In these experiments, we use the top N examples from the automatically labeled data to train on a masked language model objective. We finetune the masked language model using Wolf et al. (2019) . ",
"cite_spans": [
{
"start": 175,
"end": 193,
"text": "Wolf et al. (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Masked language model pretraining",
"sec_num": "4.4"
},
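A sketch of this masked language model step with the Hugging Face Trainer (Wolf et al., 2019); the dataset wrapper, output directory, number of epochs, and batch size are illustrative assumptions.

```python
# Continue BERT pretraining with an MLM objective on the selected pseudo-labeled
# sentences before supervised finetuning. Hyperparameters are illustrative.
import torch
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, sentences, tokenizer):
        self.enc = tokenizer(sentences, truncation=True, max_length=128)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()}

def mlm_pretrain(sentences, output_dir="mlm-evidence"):   # hypothetical output path
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, data_collator=collator,
            train_dataset=SentenceDataset(sentences, tokenizer)).train()
    model.save_pretrained(output_dir)   # later loaded for supervised finetuning
```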
{
"text": "Following the selection from unlabeled data method in Du et al. (2021) , we encode manually labeled and unlabeled datasets using XMLR RoBERTa (Conneau et al., 2020) which achieves state-of-the-art results on the Semantic Textual Similarity benchmark (Cer et al., 2017) . Then, for each instance in the manually labeled set, we select the top 5 nearest neighbors from the unlabeled set. We hypothesize that this process will yield more evidence like data from the whole argumentative set. We employ the new retrieved set in the three configurations of self-training we experiment with.",
"cite_spans": [
{
"start": 54,
"end": 70,
"text": "Du et al. (2021)",
"ref_id": "BIBREF5"
},
{
"start": 142,
"end": 164,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 250,
"end": 268,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "K-nearest neighbors based filtration",
"sec_num": "4.5"
},
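A sketch of this filtering step. The sentence encoder checkpoint below is an assumption (any XLM-R based sentence embedding model would fit the description); only the cosine similarity and the top-5 neighbor selection follow the text above.

```python
# For each labeled example, keep its 5 nearest unlabeled neighbors by cosine
# similarity of sentence embeddings. The encoder checkpoint is an assumption.
import torch
from sentence_transformers import SentenceTransformer

def knn_filter(labeled_sents, unlabeled_sents, k=5,
               model_name="stsb-xlm-r-multilingual"):    # assumed XLM-R based encoder
    encoder = SentenceTransformer(model_name)
    lab = torch.nn.functional.normalize(
        torch.tensor(encoder.encode(labeled_sents)), dim=-1)
    unlab = torch.nn.functional.normalize(
        torch.tensor(encoder.encode(unlabeled_sents)), dim=-1)
    sims = lab @ unlab.T                          # cosine similarity matrix
    top = sims.topk(k, dim=-1).indices            # k nearest neighbors per labeled example
    keep = sorted({int(i) for row in top for i in row})
    return [unlabeled_sents[i] for i in keep]     # deduplicated filtered unlabeled pool
```

Fixing k=5 matches the top-5 neighbors used here; the conclusion notes that optimizing this choice is left to future work.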
{
"text": "After presenting the baseline pretrained finetuned language model results, we discuss how adding the different self-training approaches sheds light on the three research questions introduced in Section 1. evidenceBERT. We start by replicating the results in Reimers et al. (2019) , using the BERT implementation from Wolf et al. (2019) . For training hyperparameters, we finetune BERT for 6 epochs of training. We employ Adam (Kingma and Ba, 2014) optimizer for training and an initial learning rate of 4e \u2212 5. We chose our training parameters based on a manual search optimized on 5% of the training data. We achieve an accuracy of 80% on the test set, which is almost the same result reported in Reimers et al. (2019) .",
"cite_spans": [
{
"start": 258,
"end": 279,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
},
{
"start": 317,
"end": 335,
"text": "Wolf et al. (2019)",
"ref_id": "BIBREF26"
},
{
"start": 698,
"end": 719,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
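The reported setup (6 epochs, Adam, initial learning rate 4e-5) roughly corresponds to the sketch below; the batch construction, device handling, and batch size are assumptions, not values reported in the paper.

```python
# Sketch of the evidenceBERT finetuning setup: 6 epochs, Adam, initial lr 4e-5.
# The train_loader (dict batches with a "labels" key) is an illustrative assumption.
import torch
from transformers import BertForSequenceClassification

def finetune_evidence_bert(train_loader, device="cuda", epochs=6, lr=4e-5):
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in train_loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss            # cross-entropy over evidence / non-evidence
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```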
{
"text": "Q1. Under which conditions can self-training improve finetuned pretrained language models? Results suggest that by adding appropriate N pseudo examples, bootstrapped self-training and masked language model pretraining can improve accuracy over evidenceBERT. Figure 1 shows that both bootstrapped self-training and masked language model pretraining can improve accuracy by 1% to 2% over finetuned BERT at optimal N . 3 While both masked language model pretraining and bootstrapped self-training can improve performance, the masked language model pretraining is more robust to the selected N pseudo examples (e.g., masked language modeling never degrades baseline performance). On the other hand, Figure 1 implies that utilizing pseudo examples in regular pretraining always performs poorly when compared to the evidenceBERT baseline.",
"cite_spans": [
{
"start": 396,
"end": 417,
"text": "BERT at optimal N . 3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 695,
"end": 701,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Q2. Is there a constraint on the domain of unlabeled data? Comparing Figure 1 C and D to Figure 1 A and B, respectively, suggests that using an unlabeled corpus from a different distribution like Webis-Debate-16 largely can achieve similar improvements as using unlabeled Wikipedia. This is true for masked language model pretraining and bootstrapped self-training, both when comparing results to evidenceBERT at optimal N pseudo examples (except for masked language modeling in C versus A) and with respect to robustness over N .",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 89,
"end": 97,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Q3. Does increasing the similarity between labeled and unlabeled data improve self-training? Comparing Figure 1 B and D to Figure 1 A and C, respectively, shows that increasing the similarity between labeled and unlabeled data yields improvements in terms of accuracy at optimal N , particularly when using out of domain unlabeled data as both masked language modeling and bootstrapped self-training improve. Robustness also improves.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We explored a variety of self-training configurations for evidence detection on top of BERT. Results show that 1) self-training with bootstrapped self-training and masked language model pretraining (but not with pretrain on pseudo labels) can improve finetuned large pretrained language models such as BERT; 2) unlabeled data can be utilized from both in-domain (Wikipedia) or out-of-domain (2021) in classification tasks outside of argument mining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "(Webis-Debate-16) sources; and 3) filtration of unlabeled data via selecting nearest neighbors with semantic similarity improves results. Future plans include covering more Argument Mining tasks and domains, optimizing selection of nearest neighbors instead of using fixed k, and using various blending techniques of pseudo labeled data during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://www.research.ibm.com/artificialintelligence/project-debater/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A similar level of improvement was found byDu et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Ahmed Magooda, Nhat Tran, and Mahmoud Azab for their fruitful comments and corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Identifying justifications in written dialogs",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE Fifth International Conference on Semantic Computing",
"volume": "",
"issue": "",
"pages": "162--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Owen Rambow. 2011. Identifying justifi- cations in written dialogs. In 2011 IEEE Fifth Inter- national Conference on Semantic Computing, pages 162-168. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), pages 1-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Imho fine-tuning improves claim detection",
"authors": [
{
"first": "Tuhin",
"middle": [],
"last": "Chakrabarty",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hidey",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "558--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuhin Chakrabarty, Christopher Hidey, and Kathleen McKeown. 2019. Imho fine-tuning improves claim detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 558-563.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, \u00c9douard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural ranking models with weak supervision",
"authors": [
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Zamani",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Jaap",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "W Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural rank- ing models with weak supervision. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 65-74.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Self-training improves pre-training for natural language understanding",
"authors": [
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Beliz",
"middle": [],
"last": "Gunel",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Onur",
"middle": [],
"last": "Celebi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "5408--5418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingfei Du, \u00c9douard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Veselin Stoyanov, and Alexis Conneau. 2021. Self-training improves pre-training for natural language under- standing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 5408-5418.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Argumentation mining in user-generated web discourse",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "1",
"pages": "125--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal and Iryna Gurevych. 2017. Argumenta- tion mining in user-generated web discourse. Com- putational Linguistics, 43(1):125-179.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The argument reasoning comprehension task: Identification and reconstruction of implicit warrants",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1930--1940",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. The argument reasoning comprehension task: Identification and reconstruc- tion of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1930-1940.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Selftraining for end-to-end speech recognition",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7084--7088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Kahn, Ann Lee, and Awni Hannun. 2020. Self- training for end-to-end speech recognition. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 7084-7088. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Self-training pre-trained language models for zero-and few-shot multi-dialectal arabic sequence labeling",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "769--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Khalifa, Muhammad Abdul-Mageed, and Khaled Shaalan. 2021. Self-training pre-trained lan- guage models for zero-and few-shot multi-dialectal arabic sequence labeling. In Proceedings of the 16th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Main Volume, pages 769-782.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised corpus-wide claim detection",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Gretz",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Sznajder",
"suffix": ""
},
{
"first": "Shay",
"middle": [],
"last": "Hummel",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Levy, Shai Gretz, Benjamin Sznajder, Shay Hum- mel, Ranit Aharonov, and Noam Slonim. 2017. Un- supervised corpus-wide claim detection. In Pro- ceedings of the 4th Workshop on Argument Mining, pages 79-84.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling argument strength in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "543--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Persing and Vincent Ng. 2015. Modeling argu- ment strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Self-training for endto-end speech translation",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Javad"
],
"last": "Dousti",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for end- to-end speech translation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classification and clustering of arguments with contextualized word embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of ar- guments with contextualized word embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Strong baselines for neural semi-supervised learning under domain shift",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1044--1054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder and Barbara Plank. 2018. Strong base- lines for neural semi-supervised learning under do- main shift. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044-1054.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Probability of error of some adaptive pattern-recognition machines",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Scudder",
"suffix": ""
}
],
"year": 1965,
"venue": "IEEE Transactions on Information Theory",
"volume": "11",
"issue": "3",
"pages": "363--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans- actions on Information Theory, 11(3):363-371.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unitn: Training deep convolutional neural network for twitter sentiment classification",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "464--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Unitn: Training deep convolutional neural network for twitter sentiment classification. In Proceedings of the 9th international workshop on semantic eval- uation (SemEval 2015), pages 464-469.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Will it blend? blending weak and strong labeled data in a neural network for argumentation mining",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Shnarch",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Alzate",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Gleize",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "599--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyal Shnarch, Carlos Alzate, Lena Dankin, Mar- tin Gleize, Yufang Hou, Leshem Choshen, Ranit Aharonov, and Noam Slonim. 2018. Will it blend? blending weak and strong labeled data in a neu- ral network for argumentation mining. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 599-605.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Identifying argumentative discourse structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "46--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive es- says. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46-56.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semi-supervised singing voice separation with noisy self-training",
"authors": [
{
"first": "Zhepei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ritwik",
"middle": [],
"last": "Giri",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "Isik",
"suffix": ""
},
{
"first": "Jean-Marc",
"middle": [],
"last": "Valin",
"suffix": ""
},
{
"first": "Arvindh",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
}
],
"year": 2021,
"venue": "ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "31--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhepei Wang, Ritwik Giri, Umut Isik, Jean-Marc Valin, and Arvindh Krishnaswamy. 2021. Semi-supervised singing voice separation with noisy self-training. In ICASSP 2021-2021 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 31-35. IEEE.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Self-training with noisy student improves imagenet classification",
"authors": [
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "10687--10698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. 2020. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10687-10698.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Billion-scale semisupervised learning for image classification",
"authors": [
{
"first": "Herv\u00e9",
"middle": [],
"last": "I Zeki Yalniz",
"suffix": ""
},
{
"first": "Kan",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "Manohar",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Paluri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mahajan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.00546"
]
},
"num": null,
"urls": [],
"raw_text": "I Zeki Yalniz, Herv\u00e9 J\u00e9gou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. 2019. Billion-scale semi- supervised learning for image classification. arXiv preprint arXiv:1905.00546.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural infor- mation processing systems, 32.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "33rd annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Golnaz",
"middle": [],
"last": "Ghiasi",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": null,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training. Ad- vances in Neural Information Processing Systems, 33.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Self-training techniques results",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"num": null,
"text": "Labeled Wikipedia examples in the IBM Debater evidence dataset",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Wikipedia) argumentative corpus called Webis-</td></tr><tr><td>Debate-16, that is unlike Wikipedia, constructed</td></tr><tr><td>from online debates.</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: (Non-)Argumentative examples (unlabeled</td></tr><tr><td>for evidence) in the Webis-Debate-16 dataset</td></tr></table>"
}
}
}
}