{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:02.579205Z"
},
"title": "Annotating argumentation in Swedish social media",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Lindahl",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Spr\u00e5kbanken Text University of Gothenburg",
"location": {}
},
"email": "anna.lindahl@svenska.gu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's \u03ba and Krippendorff's \u03b1 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's \u03ba and Krippendorff's \u03b1 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, argumentation mining has grown into a central research topic within the field of computational linguistics. With the aim of automatically identifying and analyzing argumentation in text, its envisioned applications are many, from more effective document retrieval to learning aids (Lawrence and Reed, 2020) . There are many different approaches to how argumentation can be modeled and annotated and there are now many data sets of different size and level of annotation, with domains ranging from legal documents to social media. However, in many of the existing data sets, inter-annotator agreement is not very high and it is because annotating argumentation turns out to be a quite challenging task. There is still a need of more annotated data, as well as investigating how to reliably annotate data. It is also important to investigate other languages than English. Because of this, we have conducted a small annotation study on Swedish social-media data where the focus has been on identifying instances of argumentation but not analyzing them further. 1 This is both to select documents for further analysis of the identified argumentation instances but also in order to investigate how reliably annotators can agree on what is argumentation or not.",
"cite_spans": [
{
"start": 298,
"end": 323,
"text": "(Lawrence and Reed, 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Annotating with the aim to distinguish what is argumentative from what is not argumentative has not been the most common goal in argumentation mining, although it is necessarily part of studies that annotate components of argumentation, either implicitly or explicitly as a first step in an argumentation mining pipeline. When it comes to documents from the web, the annotation of argumentation is usually done with respect to a topic. For example Habernal et al. (2014) , annotated comments and blog posts as argumentative with respect to a topic in order to select documents for further annotation. On this they reach a 0.51 Fleiss \u03ba and 0.59 Cohen's \u03ba. Similarly, Habernal and Gurevych (2017) annotated documents from web discourse as 'non-persuasive' and 'on topic persuasive' before moving on to annotate microstructure. They reached Fleiss \u03ba of 0.59 on this task. In some studies presence of argumentation has been annotated together with the stance or the type of the argumentation. For example, Stab et al. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.",
"cite_spans": [
{
"start": 448,
"end": 470,
"text": "Habernal et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 667,
"end": 695,
"text": "Habernal and Gurevych (2017)",
"ref_id": "BIBREF2"
},
{
"start": 1003,
"end": 1014,
"text": "Stab et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "1 In the literature it seems that the assumption is made that argumentation is universally present in all languages and that its form is comparable across languages. This is obviously subject to empirical verification, but we have not seen any literature addressing this question. Impressionistically, descriptions of the kinds and structure of argumentation made for English seem to apply also to Swedish, but more thorough studies of this would be needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "(2018) annotated sentences from the web for supporting or opposing argument, or not an argument with respect to a topic. They reached Cohen's \u03ba 0.72 and an observed agreement between 0.86-0.84. More recently, Trautmann et al. (2020) annotated sentences from the web with both expert and crowd-sourced annotators. The sentences were annotated with argument spans, and the spans were marked with stance with respect to a topic. The reached 0.6 Krippendorff's \u03b1 u and the crowd-sourced annotators reached 0.71 \u03b1 u .",
"cite_spans": [
{
"start": 209,
"end": 232,
"text": "Trautmann et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The data in this study is from two of Sweden's largest online discussion forums, Familjeliv (\"Family life\" FM), and Flashback (FB). Familjeliv is generally considered to be more about relationships and family life and Flashback more about politics, although both forums cover a broad range of topics. Both forums have a simple thread structure, where a thread is started with a post by a user and then other users reply with subsequent posts, shown in chronological order. There is a possibility for the users to cite each other, but there is no visually explicit tree structure as for example on Reddit. For this study, nine threads were randomly chosen among the threads which had a length of about 30 posts. These threads are shown in table 1. Threads 1-5 are from Familjeliv and threads 6-9 are from Flashback. We employed 8 annotators in this study: one expert (the author) and 7 with linguistic background. For the annotation, the annotation tool WebAnno (Eckart de Castilho et al., 2016) was used. The annotators were asked to annotate spans of argumentation, the spans could not overlap but otherwise there was no restriction on span length. Argumentation was only to be annotated within posts. The annotation guidelines 2 provide the annotators with a definition of argumentation, inspired by a simplified version of the definition given in Van Eemeren et al. (2013) . The definition also includes persuasiveness, as this is a fundamental part of argumentation, as discussed in Habernal and Gurevych (2017) among others. The definition is seen below, and says that argumentation should include:",
"cite_spans": [
{
"start": 1369,
"end": 1375,
"text": "(2013)",
"ref_id": null
},
{
"start": 1487,
"end": 1515,
"text": "Habernal and Gurevych (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "1. A standpoint/stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "2. This standpoint is expressed with claims, backed by reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "3. There is a real or imagined difference of opinion concerning this standpoint which leads to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "4. the intent to persuade a real or imagined other part about the standpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "What is considered as argumentation or an argument in argumentation mining tasks varies and is often adjusted to fit the task or the domain, see for example Bosc et al. (2016) who annotated tweets containing opinions as arguments due to the implicit argumentation on Twitter. In some studies a definition of argumentation is not given, but rather definitions of what is being annotated, for example argumentative components such as premises or claims. The definition described here is not meant to cover all phenomena which could be considered argumentative, the intent is to describe something which hopefully annotators can apply successfully and agree on. From this definition above these three questions were derived:",
"cite_spans": [
{
"start": 157,
"end": 175,
"text": "Bosc et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Does the poster's text signal that he or she is taking a stance / has a standpoint?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Does the poster motivate why?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Do you perceive the poster as trying to persuade someone?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "If the annotator considered the answer to be affirmative for all the questions for some span of text, they were instructed to mark it as argumentation. In addition to these questions two tests were supplied in order to aid in answering the questions. The first test asked the annotator to reformulate the argumentation as \"A, because B\", in order to answer the first two questions. The second test asked the annotator to insert \"I agree/I don't agree\" into the text. If doing so would not change the meaning of the text, this might indicate that the poster is arguing, and was intending to persuade. These two tests were not meant to give a definite answer but rather to guide the annotators. The guidelines also included examples of argumentation from the forums, as well as examples on how to apply the tests. Four of the annotators were also asked to write down the reformulation of the \"A because of B\" test in the annotation tool. We've chosen to treat the results from the all the annotators equally in this study as we've yet to analyze the reformulations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The annotators took between 4.5 and 12 hours each to annotate all the threads. The annotators which had to write down a reformulation took the longer time. Table 2 shows annotation statistics for each annotator. Annotator A is the expert annotator and seems not to diverge from the others. The annotators annotated mostly one argument per post, in some cases two arguments per post (compare number of arguments and number of posts in table 2). The annotators differ in how many argument spans they have annotated. The annotators also differ in how many sentences on average they have included in the argumentation spans, which is reflected in how many of the total tokens they have annotated. The annotators usually marked spans respecting sentence boundaries, but sometimes annotated half a sentence. When a post was annotated with a span, all but one annotator annotated at least half the post on average.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation statistics",
"sec_num": "4.2"
},
{
"text": "When calculating inter-annotator agreement (IAA) sentences were considered as being argumentative if at least half of the tokens in it were labeled as argumentation, posts were considered as being argumentative if they contained at least one argument span. Observed agreement for tokens are 25%, for sentences 40% and 39% for posts. If we include posts where all but one annotator agree, observed agreement is 60% and if we include posts were all but two agree it's 86%. 70% of all the posts are labeled with an argument span by at least one of the annotators; 47% of those posts are annotated with a span by at least 6 of the annotators. Cohen's \u03ba was measured pair-wise for all annotators and, as used in Toledo et al. (2019) , averages from Cohen's \u03ba were calculated and are shown in Koch, 1977) . Table 4 shows Krippendorff's \u03b1, for each thread and in total. \u03b1 varies between threads. IAA is the highest for posts. In order to compare the annotators observed agreement and \u03b1 were calculated holding out each annotator. Holding out annotator E had the largest effect, changing observed agreement on post level from 0.39 to 0.45 and post level \u03b1 from 0.48 to 0.52.",
"cite_spans": [
{
"start": 707,
"end": 727,
"text": "Toledo et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 787,
"end": 798,
"text": "Koch, 1977)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 801,
"end": 808,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Inter-annotator agreement",
"sec_num": "4.3"
},
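The agreement measures described in the passage above (binary post-level labels, pairwise Cohen's kappa averaged over annotator pairs, and Krippendorff's alpha over the annotator-by-post matrix) can be illustrated with a minimal sketch. This is not the authors' code: it assumes the third-party Python packages scikit-learn and krippendorff, and the variable names and toy data are invented for illustration.

```python
# Minimal sketch of the post-level IAA computation described above (assumed,
# not the paper's code). Rows = annotators, columns = posts; a post gets label
# 1 for an annotator if that annotator marked at least one argument span in it.
from itertools import combinations

import numpy as np
import krippendorff                      # pip install krippendorff
from sklearn.metrics import cohen_kappa_score

post_labels = np.array([                 # toy data, not the study's annotations
    [1, 0, 1, 1, 0, 1],                  # annotator 1
    [1, 0, 1, 0, 0, 1],                  # annotator 2
    [1, 1, 1, 1, 0, 0],                  # annotator 3
])

# Pairwise Cohen's kappa over all annotator pairs, then the overall average.
pair_kappas = [
    cohen_kappa_score(post_labels[i], post_labels[j])
    for i, j in combinations(range(post_labels.shape[0]), 2)
]
avg_kappa = float(np.mean(pair_kappas))

# Krippendorff's alpha over the same annotator-by-post matrix (nominal labels).
alpha = krippendorff.alpha(reliability_data=post_labels,
                           level_of_measurement="nominal")

print(f"average pairwise Cohen's kappa: {avg_kappa:.2f}")
print(f"Krippendorff's alpha (posts):   {alpha:.2f}")
```

A per-annotator average, as reported in the paper, would average only the pairs involving that annotator, and the hold-one-out comparison simply recomputes the same quantities with one annotator's row removed.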
{
"text": "A manual inspection of the annotation of posts was done on the two threads with highest \u03b1, thread 6 and 4, and the two threads with lowest \u03b1, threads 5 and 7. These four threads cover different topics, but the ones with lower \u03b1 have fewer tokens and shorter posts. High agreement was deemed to be when 6 or more annotators agreed, otherwise the agreement was considered low. High agreement seemed to occur when the poster is very explicit with his or her opinion and writes it in terms of \"I\" and not \"one\". Explicitly addressing a previous user, using confrontational language and contradicting also seems to occur within high agreement posts. Below is an example of a post were all annotators agreed it contained argumentation. The poster is clearly taking a stance, and is also signaling that they think the person they are addressing doesn't know what they are talking about.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "\"So? And how do you think the children are feeling right now? That it's so hard to live with their dad that they'd rather refrain from doing it altogether? It doesn't matter that you thought it was boring to not live with your boyfriend. I agree with the others in this thread that you should stop living together. For the sake of the children. You can't just think of yourself.\" Disagreements between the annotators seemed to occur when a poster is not explicit with his or her stance or opinion, as well when the poster is using irony. Implicit argumentation (if there is any) such as that will force the annotator to interpret what's not being said in the text and this probably caused disagreement. General statements that are not tied explicitly to the opinion of the poster also seem to cause disagreements. The post below has a similar message as the previous example, but this poster is more sarcastic, and the argumentation is more implicit, if there is any. Here the annotators disagreed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "\"A three-year old should be grateful because you split up his parents? Oh my god! Are you for real?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "Another example of disagreement is seen in the post below where the user could be interpreted as speculating, rather than arguing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "\"The popularity of first names is varying over generations. Names that were popular in the 1900's first half such as Albin, Arvid etc ., have returned a bit. Names which were common a few decades ago, Johan, Andreas, Magnus, and Anders seem to have completely disappeared now. I think Anders is or was at least a few years ago the most common name for persons in high positions in the business world.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "The guidelines asked for a stance or standpoint, which might be why posts where the author is clearly taking a stance have high agreement. The third condition, the intent to persuade, might be the reason posts with confrontational (and sometimes condescending) language have high agreement -if someone strongly disagrees with someone they might also intent to persuade them that they are wrong.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the annotation results",
"sec_num": "4.4"
},
{
"text": "IAA values such as the ones reported here are not uncommon in argumentation mining tasks. Still, both the Cohen's \u03ba of 0.48 and the Krippendorff's \u03b1 0.48 are lower than the previously reported studies, (for example 0.59 Cohen's \u03ba in Habernal et al. (2014) or 0.71 Krippendorff's \u03b1 u in Trautmann et al. (2020) ). However, as opposed to those studies, the annotators were not asked to annotate with respect to a topic, so the results are not fully comparable. Annotating only 9 threads might have affected the IAA, especially since the IAA varied between the threads. When manually inspecting the annotations, it seemed as when the conditions asked for in the guidelines were very explicitly met, annotators agreed. When the argumentation (or not argumentation) was more implicit the annotators disagreed. This is something which has to be considered when further developing the guidelines. Another thing to consider when annotating complex phenomena such as argumentation is that even though the annotators disagree, it might not be the case that one is right and the other is wrong. As shown in for example Lindahl et al. (2019) there are cases where two different annotations could both be considered correct. If one allows for several annotations to be correct, this would need to be reflected in both the guidelines and evaluation.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "Habernal et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 286,
"end": 309,
"text": "Trautmann et al. (2020)",
"ref_id": null
},
{
"start": 1108,
"end": 1129,
"text": "Lindahl et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & future directions",
"sec_num": "5"
},
{
"text": "In the future we plan to test the guidelines in a domain where one can assume that people are more explicit with their argumentation, such as newspapers. We also plan to extend the guidelines to annotate components of argumentation to see how this affects the annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & future directions",
"sec_num": "5"
},
{
"text": "Please note that the guidelines were written in Swedish, which means some of the nuances of the following descriptions might be lost in translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work presented here has been partly supported by an infrastructure grant to Spr\u00e5kbanken Text, University of Gothenburg, for contributing to building and operating a national e-infrastructure funded jointly by the participating institutions and the Swedish Research Council (under contract no. 2017-00626). We would also like to thank the anonymous reviewers for their constructive comments and feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "DART: a dataset of arguments and their relations on Twitter",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Bosc",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1258--1263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Bosc, Elena Cabrio, and Serena Villata. 2016. DART: a dataset of arguments and their relations on Twitter. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1258-1263, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A web-based tool for the integrated annotation of semantic and syntactic structures",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Eckart de Castilho",
"suffix": ""
},
{
"first": "\u00c9va",
"middle": [],
"last": "M\u00fajdricza-Maydt",
"suffix": ""
},
{
"first": "Seid Muhie",
"middle": [],
"last": "Yimam",
"suffix": ""
},
{
"first": "Silvana",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH)",
"volume": "",
"issue": "",
"pages": "76--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Eckart de Castilho,\u00c9va M\u00fajdricza-Maydt, Seid Muhie Yimam, Silvana Hartmann, Iryna Gurevych, Anette Frank, and Chris Biemann. 2016. A web-based tool for the integrated annotation of semantic and syntactic structures. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humani- ties (LT4DH), pages 76-84, Osaka, Japan, December. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Argumentation mining in user-generated web discourse",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "43",
"issue": "",
"pages": "125--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. 43(1):125- 179.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Argumentation mining on the web from information seeking perspective",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Eckle-Kohler",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "ArgNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal, Judith Eckle-Kohler, and Iryna Gurevych. 2014. Argumentation mining on the web from informa- tion seeking perspective. In ArgNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "J",
"middle": [
"Richard"
],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Bio- metrics, 33(1):159-174.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Argument mining: A survey",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics",
"volume": "45",
"issue": "4",
"pages": "765--818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards assessing argumentation annotation -a first step",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Lindahl",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Jacobo",
"middle": [],
"last": "Rouces",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "177--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Lindahl, Lars Borin, and Jacobo Rouces. 2019. Towards assessing argumentation annotation -a first step. In Proceedings of the 6th Workshop on Argument Mining, pages 177-186, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cross-topic argument mining from heterogeneous sources",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3664--3674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic argument quality assessment -new datasets and methods",
"authors": [
{
"first": "Assaf",
"middle": [],
"last": "Toledo",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Gretz",
"suffix": ""
},
{
"first": "Edo",
"middle": [],
"last": "Cohen-Karlik",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Venezian",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Lahav",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5625--5635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019. Automatic argument quality assessment -new datasets and methods. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5625-5635. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hinrich Sch\u00fctze, and Iryna Gurevych. 2020. Finegrained argument unit recognition and classification",
"authors": [
{
"first": "Dietrich",
"middle": [],
"last": "Trautmann",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "9048--9056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dietrich Trautmann, Johannes Daxenberger, Christian Stab, Hinrich Sch\u00fctze, and Iryna Gurevych. 2020. Fine- grained argument unit recognition and classification. In AAAI, pages 9048-9056.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fundamentals of argumentation theory: A handbook of historical backgrounds and contemporary developments",
"authors": [
{
"first": "Frans",
"middle": [
"H"
],
"last": "Van Eemeren",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Grootendorst",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"H"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Plantin",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"A"
],
"last": "Willard",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frans H Van Eemeren, Rob Grootendorst, Ralph H Johnson, Christian Plantin, and Charles A Willard. 2013. Fun- damentals of argumentation theory: A handbook of historical backgrounds and contemporary developments. Routledge.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Average Cohen's \u03ba -sents 0.44 0.42 0.30 0.33 0.30 0.38 0.38 0.35 0.35 Average Cohen's \u03ba -posts 0.52 0.53 0.42. 0.46 0.39 0.55 0.48 0.52 0.48",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>4 Annotation</td></tr><tr><td>4.1 Annotation guidelines and setup</td></tr></table>",
"text": "Thread statistics",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>no. of arg posts</td><td>% of tokens annotated</td><td>avg no. sent / arg span</td></tr></table>",
"text": "Annotator F has the highest Annotator no. arg spans no. arg tokens no. arg sents",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Annotation statistics for each annotator. average \u03ba, 0.55, and annotator E has the lowest average. Values between 0.21 and 0.40 are considered fair agreement, values between 0.41 and 0.61 are considered moderate agreement (Landis and",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Krippendorff's \u03b1 Thread</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>All</td></tr><tr><td>Tokens</td><td colspan=\"7\">0.311 0.187 0.419 0.118 0.355 0.358 0.31</td><td colspan=\"3\">0.166 0.166 0.296</td></tr><tr><td>Sents</td><td colspan=\"2\">0.365 0.22</td><td colspan=\"8\">0.434 0.112 0.486 0.462 0.398 0.299 0.327 0.356</td></tr><tr><td>Posts</td><td colspan=\"10\">0.525 0.363 0.676 0.425 0.437 0.412 0.573 0.309 0.369 0.482</td></tr></table>",
"text": "Average Cohen's \u03ba for each annotator.",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Krippendorff's \u03b1.",
"num": null
}
}
}
}