{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:56.638413Z" }, "title": "Citizen Involvement in Urban Planning -How Can Municipalities Be Supported in Evaluating Public Participation Processes for Mobility Transitions?", "authors": [ { "first": "Julia", "middle": [], "last": "Romberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University D\u00fcsseldorf", "location": {} }, "email": "julia.romberg@hhu.de" }, { "first": "Stefan", "middle": [], "last": "Conrad", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University", "location": { "settlement": "D\u00fcsseldorf" } }, "email": "stefan.conrad@hhu.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Public participation processes allow citizens to engage in municipal decision-making processes by expressing their opinions on specific issues. Municipalities often only have limited resources to analyze a possibly large amount of textual contributions that need to be evaluated in a timely and detailed manner. Automated support for the evaluation is therefore essential, e.g. to analyze arguments. In this paper, we address (A) the identification of argumentative discourse units and (B) their classification as major position or premise in German public participation processes. The objective of our work is to make argument mining viable for use in municipalities. We compare different argument mining approaches and develop a generic model that can successfully detect argument structures in different datasets of mobility-related urban planning. We introduce a new data corpus comprising five public participation processes. In our evaluation, we achieve high macro F 1 scores (0.76-0.80 for the identification of argumentative units; 0.86-0.93 for their classification) on all datasets. Additionally, we improve previous results for the classification of argumentative units on a similar German online participation dataset.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Public participation processes allow citizens to engage in municipal decision-making processes by expressing their opinions on specific issues. Municipalities often only have limited resources to analyze a possibly large amount of textual contributions that need to be evaluated in a timely and detailed manner. Automated support for the evaluation is therefore essential, e.g. to analyze arguments. In this paper, we address (A) the identification of argumentative discourse units and (B) their classification as major position or premise in German public participation processes. The objective of our work is to make argument mining viable for use in municipalities. We compare different argument mining approaches and develop a generic model that can successfully detect argument structures in different datasets of mobility-related urban planning. We introduce a new data corpus comprising five public participation processes. In our evaluation, we achieve high macro F 1 scores (0.76-0.80 for the identification of argumentative units; 0.86-0.93 for their classification) on all datasets. 
Additionally, we improve previous results for the classification of argumentative units on a similar German online participation dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In many democratic countries, political decisions are increasingly developed through the participation of citizens. Public participation processes allow citizens to voice their suggestions and concerns on specific issues, for example in urban planning, and thus influence decision-making processes. Participation can take place in formats that vary from on-site events such as citizen workshops, to written submissions via letter or e-mail, and to online platforms where citizens can discuss proposals digitally. Building on Scharpf (1999) , we can distinguish two main goals of public participation processes. On the one hand, the additional input provided by citizens can influence the decision-making process and, potentially, lead to more effective policies. On the other hand, citizens are assumed to develop a higher acceptance of the output when given an opportunity to participate and, ultimately, the resulting decisions have a higher legitimacy.", "cite_spans": [ { "start": 525, "end": 539, "text": "Scharpf (1999)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to be able to include citizen comments in the further decision-making process, those comments first have to be evaluated. However, both offline and online participation formats have the potential to generate a high number of responses (Shulman, 2003; Schlosberg et al., 2008) , e.g., thousands of contributions. Along with stringent schedules in decision-making processes, this often poses major challenges for municipalities. Still, participation contributions are commonly evaluated manually with considerable effort. Therefore, if municipalities do not have enough resources (human or monetary) to shoulder this effort, the detailed evaluation will have to be cut back. As a result, opinions might be completely omitted or not taken into account equally. This in turn can have a negative influence on the goals of public participation processes. Filtering out individual or mass opinions risks losing important clues for effective policies. It can also endanger citizens' confidence in the opportunity to participate in decision-making and weaken civic engagement (Mendelson, 2012) . Besides, decision acceptance is influenced by perceived fairness (Esaiasson, 2010) .", "cite_spans": [ { "start": 244, "end": 259, "text": "(Shulman, 2003;", "ref_id": "BIBREF36" }, { "start": 260, "end": 284, "text": "Schlosberg et al., 2008)", "ref_id": "BIBREF35" }, { "start": 1082, "end": 1099, "text": "(Mendelson, 2012)", "ref_id": "BIBREF23" }, { "start": 1167, "end": 1184, "text": "(Esaiasson, 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automating the evaluation of public participation processes can help overcome these problems (OECD, 2004) and has been addressed by research initiatives such as the Cornell eRulemaking Initiative (CeRI) 1 and, more recently, the Citizen participation and machine learning for a better democracy project 2 . Over the years, several tasks that arise in the evaluation process have been highlighted. These include thematic classification and clustering of citizen contributions (e.g. 
Purpura et al., 2008; Arana-Catania et al., 2021; Teufl et al., 2009) , summarization of similar content (e.g Arana-Catania et al., 2021) , detection of duplicates (e.g. Yang et al., 2006) , and analysis of arguments and opinions (e.g. Park and Cardie, 2014; Lawrence et al., 2017) .", "cite_spans": [ { "start": 93, "end": 105, "text": "(OECD, 2004)", "ref_id": "BIBREF27" }, { "start": 482, "end": 503, "text": "Purpura et al., 2008;", "ref_id": "BIBREF31" }, { "start": 504, "end": 531, "text": "Arana-Catania et al., 2021;", "ref_id": null }, { "start": 532, "end": 551, "text": "Teufl et al., 2009)", "ref_id": "BIBREF37" }, { "start": 592, "end": 619, "text": "Arana-Catania et al., 2021)", "ref_id": null }, { "start": 652, "end": 670, "text": "Yang et al., 2006)", "ref_id": "BIBREF39" }, { "start": 718, "end": 740, "text": "Park and Cardie, 2014;", "ref_id": "BIBREF28" }, { "start": 741, "end": 763, "text": "Lawrence et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on arguments in public participation processes that address sustainable mobility and land use in Germany. German cities have involved their citizens in hundreds of decisionmaking processes on these issues in recent years. 3 We look at five of them in detail, four of which are processes for concrete improvements to cycling infrastructure and one of which is a strategic process for creating a general mobility concept for a city. At the same time, we consider two very different participation formats, namely online platforms and questionnaires.", "cite_spans": [ { "start": 246, "end": 247, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper's first objective is to analyze the strengths and weaknesses of previously published argument mining approaches for public participation processes when they are applied to different German datasets. Our attention is focused on the classification of text segments as argumentative or non-argumentative, as well as on the downstream classification of argumentation components. In addition to our datasets, we include the only other German public participation dataset (to the best of our knowledge) for argument mining (Liebeck et al., 2016) in the evaluation.", "cite_spans": [ { "start": 528, "end": 550, "text": "(Liebeck et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our second objective is to improve the results obtained on the datasets under consideration by the previous approaches for both classification tasks. For this we apply BERT (Devlin et al., 2019) which is known to perform very well on many tasks including argument mining.", "cite_spans": [ { "start": 173, "end": 194, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In practice, the use of argument mining to evaluate public participation processes only adds value when the benefits outweigh the effort. Manual coding of data and the training or fine-tuning of machine learning models are costly. In addition, machine learning requires expert knowledge and usually cannot be performed directly by the municipalities. An optimal solution would be a universally valid model that can be applied flexibly to new datasets. 
Our third objective is hence to investigate the extent to which trained models can recognize argument structures in other public participation processes that were not part of the training process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are: (1) We present a new data corpus of five mobility-related public participation processes that vary in content and format. The German corpus comprises 17, 306 sentences coded with an argument scheme tailored to informal public participation processes. 2We perform a broad comparison of previously published best approaches for argument mining in public participation processes, which so far have been evaluated mostly on distinct datasets. We compare the algorithms directly on our data corpus and compare the performances. 3We show that BERT surpasses previously published argument mining approaches for public participation processes on German data for both tasks. Especially when classifying argument components, macro F 1 results improve by between 0.05 and 0.12 depending on the dataset. (4) In a cross-dataset evaluation, we show that BERT models trained on one dataset can recognize argument structures in other public participation datasets (which were not part of the training) with comparable goodness of fit. This finding is an important step towards practical application in municipalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Mining arguments in the domain of citizen participation has been the subject of several studies. Much of this work centers on U.S. e-rulemaking initiatives, where citizens are given the opportunity for feedback on rule proposals. An early attempt to identify, classify, and relate arguments in e-rulemaking was made by Kwon et al. (2006) ; . Arguments were built as trees of claims and subclaims or main-support with support relations. Eidelman and Grom (2019) extended the detection of generic argument components (support and opposition) with corpus-specific argument types. Niculae et al. (2017) , Galassi et al. (2018) and Cocarascu et al. (2020) differentiate between five proposition types (fact, testimony, value, policy, and reference) and evidence or reason relations. In addition, other research examined specific properties of argumentation and discourse in public participation processes. Park and Cardie (2014) identified the lack of appropriate justifications as a common problem in the analysis of citizen contributions and tried to predict whether and by what means a proposal is verifiable. Subsequent work was presented by Park et al. (2015) and Guggilla et al. (2016) . Furthermore, Lawrence et al. (2017) and Konat et al. (2016) investigated discourse analysis in more detail and measured controversy and divisiveness in argument graphs.", "cite_spans": [ { "start": 319, "end": 337, "text": "Kwon et al. (2006)", "ref_id": "BIBREF17" }, { "start": 436, "end": 460, "text": "Eidelman and Grom (2019)", "ref_id": "BIBREF8" }, { "start": 577, "end": 598, "text": "Niculae et al. (2017)", "ref_id": "BIBREF26" }, { "start": 601, "end": 622, "text": "Galassi et al. (2018)", "ref_id": "BIBREF11" }, { "start": 627, "end": 650, "text": "Cocarascu et al. (2020)", "ref_id": "BIBREF5" }, { "start": 901, "end": 923, "text": "Park and Cardie (2014)", "ref_id": "BIBREF28" }, { "start": 1141, "end": 1159, "text": "Park et al. 
(2015)", "ref_id": "BIBREF29" }, { "start": 1164, "end": 1186, "text": "Guggilla et al. (2016)", "ref_id": "BIBREF13" }, { "start": 1202, "end": 1224, "text": "Lawrence et al. (2017)", "ref_id": "BIBREF19" }, { "start": 1229, "end": 1248, "text": "Konat et al. (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Besides e-rulemaking initiatives, informal public participation processes were considered. Our work shares most similarity to Liebeck et al. (2016) who focused on a German-language process about the restructuring of a former airport area. The authors developed an argumentation scheme specifically adapted to discursive online public participation processes. With regard to languages other than German, Fierro et al. (2017) and in a followup work Giannakopoulos et al. (2019) studied a corpus consisting of over 200, 000 political arguments in Chilean Spanish dialect, derived from a participatory process to form a new constitution for Chile. The arguments were classified thematically according to constitutional concepts and also as either policies, facts or values. Further work (Morio and Fujita, 2018a,b ) paid attention to the complex structure of arguments in public online participation. Relying on a Japanese dataset, the authors presented an annotation scheme for discussion threads taking care of inner-post relations and inter-post interactions.", "cite_spans": [ { "start": 126, "end": 147, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" }, { "start": 403, "end": 423, "text": "Fierro et al. (2017)", "ref_id": "BIBREF10" }, { "start": 447, "end": 475, "text": "Giannakopoulos et al. (2019)", "ref_id": "BIBREF12" }, { "start": 783, "end": 809, "text": "(Morio and Fujita, 2018a,b", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although the work to date has produced encouraging results, most approaches are not yet mature for practical use (e.g. with German public participation processes). Only few previous research addressed the development of general models (see Cocarascu et al. (2020) , who perform a cross-dataset comparison of baselines for relation prediction). Therefore, this paper investigates the cross-data transferability of trained models for the identification and classification of argument components in public participation processes, an investigation that is highly relevant for practical use.", "cite_spans": [ { "start": 240, "end": 263, "text": "Cocarascu et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Data Corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our five datasets originate from urban planning and are concerned with mobility. Four of them represent very specific processes for improving cycling as a mode of transportation, the fourth dataset stems from a more general strategic process for developing a mobility concept. These five datasets comprise different participation types, i.e., online platforms and questionnaires.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "The cycling dialogues were a pilot project for improving the cycle traffic infrastucture in three German cities, namely Bonn, Cologne and Moers. 
During a five-week period in 2017, citizens were able to participate (make propositions, discuss and rate propositions or comments) in a map-based online consultation 4 . While in Bonn and Moers suggestions for improvement could be made city-wide, the focus in Cologne was on a specific city district. As a result, three datasets of similar online public participation processes from different local contexts emerged. In the following, these datasets will be referred to as CD_B, CD_C and CD_M. We focus on the initial text contributions in which citizens make new proposals. CD_B is the largest dataset, comprising 12,103 sentences from 2,364 contributions, whereas CD_C and CD_M are considerably smaller, with 366 and 459 contributions consisting of 1,704 and 2,193 sentences, respectively. On average, the contributions consist of 4.83, 4.66 and 4.78 sentences (\u03c3 = 2.63, \u03c3 = 3.00 and \u03c3 = 2.61) with 15.94, 15.16 and 15.43 tokens (\u03c3 = 10.92, \u03c3 = 10.45 and \u03c3 = 10.81).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cycling dialogues", "sec_num": null }, { "text": "Mobility concept Since 2019, the German city of Krefeld has been planning what the city's mobility should look like in the future. In addition to various on-site events, multiple public participation processes were carried out online. The dataset presented here, MC_K, includes the 2,008 sentences of the 337 initial contributions from two interrelated online processes. In the first process, citizens were informed about the drafts of seven citywide action plans. The fields of action were urban development and regional cooperation, flowing motor vehicle traffic, commercial transport, stationary traffic, public transport, bicycle traffic, and foot traffic. As part of the planning process, citizens were asked to comment on the planned actions. The second process gave citizens the opportunity to submit concrete propositions for actions in specified city districts. Citizens wrote an average of 5.96 sentences (\u03c3 = 5.63), slightly more than in the processes described above. The average of 15.25 words per sentence (\u03c3 = 10.80) resembles the cycling dialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cycling dialogues", "sec_num": null }, { "text": "Citizen questionnaire on cycling Accompanying the cycling dialogues, a postal survey was conducted in a randomized sample of each city's population. The citizens were asked to submit suggestions for improvements to cycling in free-text fields.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cycling dialogues", "sec_num": null }, { "text": "Respondents could fill out the questionnaire either by hand or online. In this paper, we focus on the 1,386 citizen contributions from the city of Bonn (CQ_B), which consist of 1,505 sentences. By comparing the length of the survey contributions (1.09 sentences on average (\u03c3 = 0.37), 7.75 tokens per sentence (\u03c3 = 6.30)) with the online platform contributions, we can clearly see that citizens write more succinctly in surveys of this type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cycling dialogues", "sec_num": null }, { "text": "A key aspect of public participation is that citizens can submit their own ideas on a given topic, such as the cycling infrastructure of a city or the development of a mobility concept. For example, one contribution from CD_B, translated into English, states: \"A new pavement is urgently needed here to be able to cycle along. 
The current pavement has grooves & cracks in the surface, so that cycling between Ringstra\u00dfe & Kreuzherrenstra\u00dfe is very risky, especially in wet conditions.\" The writer proposes to renew the pavement and substantiates this with the current poor and dangerous condition of the pavement. In urban planning processes, causes for suggested improvements are mostly descriptions of infrastructure problems or (perceived) planning deficits, while the propositions are measures to overcome these issues. Several interviews we conducted in 2020 with local authorities and urban planning practitioners emphasized the value in automatically recognizing the problems that citizens describe and the solutions they propose in text contributions (Romberg and Escher, 2020) . We follow the terminology of Liebeck et al. (2016) , who developed an argumentation model for informal online public participation processes based on three argument components: major positions provide \"options for actions or decisions that occur in the discussion\". In simpler terms, these are the propositions that citizens make. Premises are \"reasons that attack or support a major position, a claim or another premise\". Claims are defined as \"pro or contra stance towards a major position\". In this work, we rely on the concepts of major positions and premises, as our focus is on the detection of propositions and underlying reasons. We leave for future work the detection of pro or contra stances expressed by fellow citizens in the feedback comments on initial proposals (in the case of dialogical processes).", "cite_spans": [ { "start": 1051, "end": 1077, "text": "(Romberg and Escher, 2020)", "ref_id": "BIBREF33" }, { "start": 1109, "end": 1130, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Argumentation Model", "sec_num": "3.2" }, { "text": "Coding guidelines were developed on 201 contributions from the cycling dialogues Bonn, which were excluded from the subsequent annotation process, reducing the sentences to be coded in CD_B to 10, 442. Each sentence was labeled as nonargumentative (non-arg), major position (mpos) or premise (prem). In case a sentence contained multiple argumentation components, multi-labeling was allowed. Since contribution titles often contained parts of the argument, they were included as additional sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.3" }, { "text": "We measured the inter-coder agreement on 10% of the contributions of each dataset, which were respectively annotated by three trained coders. In a subsequent curation step, disagreements were resolved by two supervisors to obtain unambiguous coding of the contributions used to measure the inter-coder agreement. High Fleiss' \u03ba values prove the reliability of the codings: 0.76 (CD_B), 0.80 (CD_C), 0.77 (CD_M), 0.73 (MC_K), and 0.76 (CQ_B). During curation, certain edge cases became obvious. We believe that this subjectivity is also reflected in a human evaluation, which is why a small deviation in coding seems acceptable, also with regard to the training of the classification algorithms. The remaining 90% of the contributions were divided equally among the coders (each 30%) and annotated independently. 
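The inter-coder agreement reported above is Fleiss' kappa over the sentence-level labels of three coders. As a minimal illustration of how such agreement can be computed (this is not the project's own tooling, and the ratings below are invented), statsmodels' implementation can be used:

```python
# Minimal sketch: Fleiss' kappa for three coders who each assign one of
# {non-arg, mpos, prem} to every sentence. Requires numpy and statsmodels.
import numpy as np
from statsmodels.stats import inter_rater as irr

LABELS = {"non-arg": 0, "mpos": 1, "prem": 2}

# Hypothetical ratings: one row per sentence, one column per coder.
ratings = np.array([
    [LABELS["mpos"], LABELS["mpos"], LABELS["mpos"]],
    [LABELS["prem"], LABELS["prem"], LABELS["mpos"]],
    [LABELS["non-arg"], LABELS["non-arg"], LABELS["non-arg"]],
    [LABELS["prem"], LABELS["prem"], LABELS["prem"]],
])

# aggregate_raters converts (sentences x coders) codes into a
# (sentences x categories) count table, which fleiss_kappa expects.
table, _ = irr.aggregate_raters(ratings)
print(f"Fleiss' kappa: {irr.fleiss_kappa(table, method='fleiss'):.2f}")
```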
These sentences were not curated; however, due to the high agreement on the over 1,700 sentences that were coded by all three annotators, we assume similar reliability on the sentences labeled by one person only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.3" }, { "text": "Since the approaches we compare in this pa-per are tailored to single-label classifications, we omit sentences containing both major position and premise to be addressed in future work. This affects 548 sentences (262 in CD_B, 49 in CD_C, 45 in CD_M, 69 in MC_K, and 123 in CQ_B). Table 1 shows the distribution of classes included in the evaluation across the five datasets. The majority of sentences in all datasets are argumentative, accounting for between 77.8% and 88.6%. Major positions and premises are distributed very differently throughout the datasets. While premises are made more frequently in the cycling dialogues, major positions are favored in MC_K and especially in CQ_B. The datasets are available under a Creative Commons License at https://github.com/juliaromberg/cimt-argumentmining-dataset/.", "cite_spans": [], "ref_spans": [ { "start": 281, "end": 288, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Annotation Process", "sec_num": "3.3" }, { "text": "Argument Mining can be divided into three subtasks: segmentation, segment classification, and relation identification (Peldszus and Stede, 2013) . First, argumentative text is split into argument discourse units (ADUs). Second, ADUs are classified according to their function in the argument. Third, relations between ADUs are identified. Peldszus and Stede (2013) assume here that it is known which texts are argumentative or relevant for the argumentation. Lawrence and Reed (2019) widen the first task and include the distinction between argumentative and non-argumentative units.", "cite_spans": [ { "start": 118, "end": 144, "text": "(Peldszus and Stede, 2013)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "In this work, we focus on (A) the classification of discourse units as argumentative (ADU) and non argumentative (non-ADU) and (B) the classification of ADUs according to contextual clausal properties for informal public participation processes. In the following, these two tasks will be referred to as Task A and Task B. We define each sentence as discourse unit, so that both tasks are sentence-level classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "Approaches for Public Participation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previously Applied Argument Mining", "sec_num": "4.1" }, { "text": "Our first objective is to compare the previously used approaches for solving Task A and Task B in public participation processes on our datasets. In the following, we provide an overview of these algorithms and describe in detail the setups we chose for our experiments (e.g. input features, hyperparameter selection). The results of our experiments are described and discussed in Section 5. For every dataset in consideration, we used a 5-fold crossvalidation, dividing the datasets into 80% training and 20% test data each time. 
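Because the compared approaches are single-label classifiers, sentences that carry both a major position and a premise are excluded before the 5-fold cross-validation just mentioned. A minimal sketch of this preparation step, assuming a simple list-of-dicts representation of the annotated sentences (field names and toy sentences are illustrative, not the released corpus schema):

```python
# Minimal sketch: drop multi-label sentences, then set up the 5-fold
# cross-validation (80% training / 20% test data per fold).
from sklearn.model_selection import KFold

annotated = [
    {"sentence": "Eine neue Fahrbahndecke wird hier dringend benoetigt.", "labels": {"mpos"}},
    {"sentence": "Der aktuelle Belag hat Rillen und Risse.", "labels": {"prem"}},
    {"sentence": "Viele Gruesse aus Bonn.", "labels": {"non-arg"}},
    {"sentence": "Der Radweg endet abrupt, bitte verlaengern.", "labels": {"mpos", "prem"}},  # excluded
] * 3  # repeated only so the toy example has enough rows for five folds

single_label = [s for s in annotated if len(s["labels"]) == 1]
texts = [s["sentence"] for s in single_label]
labels = [next(iter(s["labels"])) for s in single_label]

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(texts)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test sentences")
```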
We tuned algorithm hyperparameters using a grid search with crossvalidation (5 folds) for each split of the (outer) cross-validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previously Applied Argument Mining", "sec_num": "4.1" }, { "text": "All of the works considering the distinction between ADUs and non-ADUS have predefined sentences as elementary discourse units, as we do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A", "sec_num": "4.1.1" }, { "text": "SVM Kwon et al. (2006) , Liebeck et al. (2016) and Morio and Fujita (2018a) used support vector machines (Cortes and Vapnik, 1995) to detect ADUs with F 1 scores between 0.52 and 0.70. For our experiments, we adopted the best setup of Liebeck et al. (2016) since their dataset is most similar to ours. Sentences were represented as a combination of unigrams and grammatical features, more precisely a L 2 -normalized POS-Tag distribution 5 and a L 2 -normalized distribution of dependencies 6 . We used the radial basis function kernel, and considered C \u2208 {1, 10, 100} and \u03b3 \u2208 {0.001, 0.01, 0.1} in the grid search. We further weighted the training samples inversely proportional to the class frequencies to take care of the strong class imbalance of our datasets.", "cite_spans": [ { "start": 4, "end": 22, "text": "Kwon et al. (2006)", "ref_id": "BIBREF17" }, { "start": 25, "end": 46, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" }, { "start": 51, "end": 75, "text": "Morio and Fujita (2018a)", "ref_id": "BIBREF24" }, { "start": 105, "end": 130, "text": "(Cortes and Vapnik, 1995)", "ref_id": "BIBREF6" }, { "start": 235, "end": 256, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Task A", "sec_num": "4.1.1" }, { "text": "fastText Eidelman and Grom (2019) suggested the use of fastText (Joulin et al., 2017) and proposed balancing the training data for highly imbalanced datasets. By downsampling the majority class in the corresponding dataset, they improved the macro F 1 outcome from 0.80 to 0.90.", "cite_spans": [ { "start": 9, "end": 33, "text": "Eidelman and Grom (2019)", "ref_id": "BIBREF8" }, { "start": 64, "end": 85, "text": "(Joulin et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Task A", "sec_num": "4.1.1" }, { "text": "In our experiments, we trained two fastText models per dataset: One on the original, imbalanced dataset and one on a balanced version of the dataset where the majority class was undersampled by randomly picking samples. We used pretrained fast-Text embeddings for German with 50 dimensions, and included learning rates of 1e \u2212 1, 5e \u2212 1 and 9e \u2212 1, and 5 or 10 epochs of training in the grid search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A", "sec_num": "4.1.1" }, { "text": "More attention has been paid to the classification of ADUs in previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "SVM Kwon et al. (2006) , Park and Cardie (2014) , Liebeck et al. (2016) and Morio and Fujita (2018a) classified argument components in public participation processes with SVMs. Depending on the dataset and argumentation scheme, they yielded macro F 1 values in the range of 0.56 to 0.77.", "cite_spans": [ { "start": 4, "end": 22, "text": "Kwon et al. 
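The SVM baseline described here (unigrams plus L2-normalized POS-tag and dependency-label distributions, an RBF kernel, class weighting, and a grid over C and gamma) can be sketched as follows. The sketch uses spaCy's German model as one possible source of POS tags and dependency labels; this tooling choice, the toy sentences and the label names are our own assumptions, not prescribed by Liebeck et al. (2016):

```python
# Minimal sketch of the SVM setup (assumes scikit-learn, scipy and spaCy with
# the de_core_news_sm model installed; toy data and label names are invented).
import numpy as np
import spacy
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

nlp = spacy.load("de_core_news_sm")
POS_TAGS = sorted(nlp.get_pipe("tagger").labels)    # fine-grained (STTS-style) tags
DEP_LABELS = sorted(nlp.get_pipe("parser").labels)  # dependency labels

def grammatical_features(sentences):
    """L2-normalized POS-tag and dependency-label distributions per sentence."""
    rows = []
    for doc in nlp.pipe(sentences):
        pos = np.zeros(len(POS_TAGS))
        dep = np.zeros(len(DEP_LABELS))
        for tok in doc:
            if tok.tag_ in POS_TAGS:
                pos[POS_TAGS.index(tok.tag_)] += 1
            if tok.dep_ in DEP_LABELS:
                dep[DEP_LABELS.index(tok.dep_)] += 1
        rows.append(np.concatenate([normalize([pos])[0], normalize([dep])[0]]))
    return csr_matrix(np.vstack(rows))

train_sents = ["Hier fehlt ein sicherer Radweg.",
               "Die Kreuzung ist fuer Radfahrer gefaehrlich.",
               "Vielen Dank fuer die Moeglichkeit zur Beteiligung."] * 5
train_labels = ["ADU", "ADU", "non-ADU"] * 5

unigrams = CountVectorizer(ngram_range=(1, 1))
X = hstack([unigrams.fit_transform(train_sents), grammatical_features(train_sents)])

# class_weight="balanced" weights samples inversely proportional to class
# frequencies; the grid mirrors the C and gamma values given above.
grid = GridSearchCV(SVC(kernel="rbf", class_weight="balanced"),
                    {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]},
                    cv=5, scoring="f1_macro")
grid.fit(X, train_labels)
print(grid.best_params_)
```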
(2006)", "ref_id": "BIBREF17" }, { "start": 25, "end": 47, "text": "Park and Cardie (2014)", "ref_id": "BIBREF28" }, { "start": 50, "end": 71, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" }, { "start": 76, "end": 100, "text": "Morio and Fujita (2018a)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "For our experiments, we again relied on the closely related work of Liebeck et al. (2016) and used the same setup as described in Section 4.1.1.", "cite_spans": [ { "start": 68, "end": 89, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "fastText In Fierro et al. (2017) and Eidelman and Grom (2019) , fastText provided the best results (0.65 and 0.78). Of particular interest is that, on the Spanish dataset (Fierro et al., 2017) , fastText surpassed the SVM. We were curious to see if this behavior applies to our datasets as well.", "cite_spans": [ { "start": 12, "end": 32, "text": "Fierro et al. (2017)", "ref_id": "BIBREF10" }, { "start": 37, "end": 61, "text": "Eidelman and Grom (2019)", "ref_id": "BIBREF8" }, { "start": 171, "end": 192, "text": "(Fierro et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "In our experiments, we replicated the implementation of Fierro et al. (2017) using pretrained fast-Text embeddings (we chose 50 dimensions) and word bigrams in the classification. Grid search considered learning rates of 1e \u2212 1, 5e \u2212 1 and 9e \u2212 1, and 5 or 10 epochs of training. Similar to Task A, classes were imbalanced in our datasets, and we thus trained models with and without undersampling.", "cite_spans": [ { "start": 56, "end": 76, "text": "Fierro et al. (2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "ECGA Further deep learning architectures have been considered by Guggilla et al. (2016) and Giannakopoulos et al. (2019) . While Guggilla et al. (2016) showed that the use of convolutional neuronal networks (CNN) (LeCun et al., 1998) can marginally improve the results of an SVM, the advantages of deep learning become more obvious in the work of Giannakopoulos et al. (2019) . Using an ensemble method called ECGA, a combination of multiple learners, they improved the results of Fierro et al. (2017) by 0.07. Each learner is composed of a CNN followed by bidirectional gated recurrent units (BiGRU) (Cho et al., 2014) , connected to an attention layer (Bahdanau et al., 2015) . The class predictions of the multiple learners are averaged to obtain final predictions. FastText embeddings build the input matrix. For argument classification, Giannakopoulos et al. (2019) proposed the use of two learners with kernel sizes of 2 and 3 as well as 512 filters in the convolution and 256 GRU units.", "cite_spans": [ { "start": 65, "end": 87, "text": "Guggilla et al. (2016)", "ref_id": "BIBREF13" }, { "start": 92, "end": 120, "text": "Giannakopoulos et al. (2019)", "ref_id": "BIBREF12" }, { "start": 129, "end": 151, "text": "Guggilla et al. (2016)", "ref_id": "BIBREF13" }, { "start": 213, "end": 233, "text": "(LeCun et al., 1998)", "ref_id": "BIBREF21" }, { "start": 347, "end": 375, "text": "Giannakopoulos et al. (2019)", "ref_id": "BIBREF12" }, { "start": 481, "end": 501, "text": "Fierro et al. 
(2017)", "ref_id": "BIBREF10" }, { "start": 601, "end": 619, "text": "(Cho et al., 2014)", "ref_id": "BIBREF4" }, { "start": 654, "end": 677, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF2" }, { "start": 842, "end": 870, "text": "Giannakopoulos et al. (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "Since the proposed architecture failed to produce reasonable results on our datasets, we reduced the number of GRU units in our experiments to 64 and the number of convolution filters in to 128. We took our cue from the authors' best model for solving a different task, textual churn detection, with a smaller corresponding dataset. Despite the re-duced model architecture, ECGA still tended to neglect the minority class in our datasets. To counteract this, we additionally evaluated ECGA with undersampling. We tried batch sizes of 2, 4, and 8, as well as 1 and 2 kernels or 2 and 3 kernels for the two learners. The training ran for 200 epochs with the option of early stopping if the loss did not improve within 10 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "4.1.2" }, { "text": "Our second objective is to improve the results obtained by the previous approaches on our datasets for both classification tasks. To this end, we use BERT (Devlin et al., 2019) which has already provided promising results for Task A and Task B in other text domains, such as on persuasive online forums (Chakrabarty et al., 2019) and on heterogeneous sources of argumentative content (Reimers et al., 2019) . With public participation processes, BERT has so far only been used to identify relations between ADUs (Cocarascu et al., 2020) . We expected BERT to also perform well for Task A and Task B on public participation datasets and to outperform the other algorithms in the evaluation. We used case-sensitive German BERT 7 with an additional linear layer for sequence classification. For fine-tuning, we relied on the suggestions of Devlin et al. (2019) and included batch sizes of 16 and 32, learning rates of 5e \u2212 5, 3e \u2212 5 and 2e \u2212 5, and 1 to 4 epochs of training in the grid search.", "cite_spans": [ { "start": 155, "end": 176, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 303, "end": 329, "text": "(Chakrabarty et al., 2019)", "ref_id": "BIBREF3" }, { "start": 384, "end": 406, "text": "(Reimers et al., 2019)", "ref_id": "BIBREF32" }, { "start": 512, "end": 536, "text": "(Cocarascu et al., 2020)", "ref_id": "BIBREF5" }, { "start": 837, "end": 857, "text": "Devlin et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Bidirectional Encoder Representations from Transformers for Argument Mining in Public Participation Processes", "sec_num": "4.2" }, { "text": "This work's third objective is to investigate model generalizability in a cross-dataset evaluation. The previous two evaluation objectives were to determine which approach generates the best results for each dataset. To this end, both the training and the test data stem from the same dataset. In a practical application, this would mean that a sufficiently large amount of citizen contributions would have to be coded manually by local authorities. However, a more feasible and cost-effective solution would be to provide a pretrained classification model that can reliably recognize argument structures in new participation processes without the need for further training. 
Our goal is to provide such a model for public participation processes of mobility-related urban planning. The diversity in subjects and for-mats in our data corpus is well suited for testing the transferability to a range of processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Generalizability", "sec_num": "4.3" }, { "text": "For the cross-dataset evaluation, we used the evaluation setup described in Section 4.1 (5fold cross validation, hyperparameter tuning) and trained on CD_B in our experiments. We intentionally chose the largest dataset for training to provide reliable models. For every approach, we then applied the five resulting models to the remaining datasets and averaged the results for each dataset to obtain an average macro F 1 score. Algorithms were implemented as described in Sections 4.1 and 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Generalizability", "sec_num": "4.3" }, { "text": "For Task A, we evaluated SVM, fastText without undersampling (as will be shown in Section 5.1.1, undersampling of CD_B provided no advantage), and BERT. For Task B, we chose to evaluate models trained on undersampled data and models trained on the original data alongside. Our decision was due to the very different distribution of ADU-types in our datasets: while premises prevail in the cycling dialogues (62%-80% prem), major positions are more present in MC_K (59% mpos) and in CQ_B (80% mpos). We thus wanted to investigate whether models trained on balanced data could provide more stable results across the different datasets. To sum up, we compared the behavior of eight approaches in the cross-dataset evaluation for Task B: SVM, fastText, ECGA, and BERT trained on the original CD_B dataset, and trained on an undersampled CD_B dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Generalizability", "sec_num": "4.3" }, { "text": "In the following, we evaluate for both classification tasks the approaches from previous work (see Section 4.1) and BERT (see Section 4.2) on our corpus from Section 3. For completeness, we also have a look at the only other German public participation dataset for argument mining, THF Airport ArgMining Corpus (Liebeck et al., 2016) . THF provides 2, 078 argumentative and 355 non-argumentative sentences for Task A, and 509 major positions, 1, 170 premises, and 311 claims for Task B. 8", "cite_spans": [ { "start": 311, "end": 333, "text": "(Liebeck et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Approaches", "sec_num": "5.1" }, { "text": "Results for the classification of ADUs and non-ADUs are given in Table 2 . For each dataset, only the results of the superior fastText model are listed. 8 We evaluate the dataset according to our methodology instead of the suggested train-test split by Liebeck et al. (2016) . Undersampling models are marked with an asterisk. Overall, BERT performed best with macro F 1 values up to 0.80, improving most SVM scores by at least 0.03. 9 However, on THF the SVM yielded slightly better results. FastText struggled with the minority class. The problem was particularly evident in the three datasets with the fewest non-argumentative samples, where undersampling could improve the results at least to some degree. Table 3 shows the findings for argument component classification. For fastText and ECGA, two model variants were evaluated (with and without undersampling), of which the better one is listed. 
Undersampling models are marked with an asterisk. While undersampling slightly increased the macro performance of ECGA on all datasets, there was no enhancement with fastText. Contrary to our expectations, ECGA performed worse than fastText and could only keep up with the other approaches for datasets that have sufficient samples in the minority class. BERT showed outstanding results and could significantly advance the classification, especially for the minority classes: Compared to the SVM, which also performed well, the prediction of major positions (CD_B, CD_C, CD_M, THF) improved by between 0.08 and 0.17. Premises were predicted with improvements of 0.09 and 0.18 (MC_K and CQ_B, respectively).", "cite_spans": [ { "start": 153, "end": 154, "text": "8", "ref_id": null }, { "start": 253, "end": 274, "text": "Liebeck et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 710, "end": 717, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Task A", "sec_num": "5.1.1" }, { "text": "Next, we look at the generalization performance of the learned models for both classification tasks. Figure 1 shows the cross-dataset results of the CD_B models on the other datasets. BERT could consistently achieve good macro F 1 values (between 0.75 and 0.79) for all datasets, close to the score of 0.76 that BERT achieved on the reference dataset CD_B (\u03c3 = 0.02). The obtained values are also comparable to the dataset-internal results from Section 5.1. FastText was equally stable (\u03c3 = 0.02), but its results were on average 0.10 points lower. SVM predictions varied more (\u03c3 = 0.04), especially when transferring to CQ_B and MC_K.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Cross-Dataset Evaluation", "sec_num": "5.2" }, { "text": "Results for the cross-dataset classification of argument components are presented in Figure 2 . Both BERT model variants generalized very well and achieved an average macro F 1 score of 0.90 across the different datasets. With \u03c3 = 0.01, the undersampling model predicted remarkably stably on our datasets (\u03c3 = 0.02 for the non-undersampling model). SVM, ECGA and fastText strongly benefited from balanced training data. With undersampling, the latter two approaches could surpass the in-dataset results from Section 5.1 and thus achieved their best values for all datasets. SVM struggled with generalization on MC_K and CQ_B (\u03c3 = 0.03).", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 94, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Task B", "sec_num": "5.2.2" }, { "text": "Likewise, fastText showed some weaknesses in generalization (\u03c3 = 0.03), which were particularly noticeable in the performance drop on CQ_B (0.76) compared to the reference value (0.84). ECGA achieved more uniform results with an average macro F 1 value of 0.83 (\u03c3 = 0.02), which, however, did not come close to the high values of BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "5.2.2" }, { "text": "It turned out that the models generalize surprisingly well across the different processes. In both tasks, BERT showed superior results, but other methods were also able to provide stable predictions across the different test datasets. 
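The cross-dataset protocol behind these numbers is compact enough to sketch: the five models obtained from the 5-fold cross-validation on CD_B are each applied to every other dataset, and the macro F1 scores are averaged per dataset. The sketch below uses a trivial majority-class placeholder model and toy data so that it runs on its own; in practice, the fitted SVM, fastText, ECGA or BERT models and the real corpora take their place:

```python
# Schematic sketch of the cross-dataset evaluation (toy data, placeholder model).
from collections import Counter
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold

class MajorityClassModel:
    """Placeholder standing in for a real classifier (e.g. fine-tuned BERT)."""
    def __init__(self, labels):
        self.majority = Counter(labels).most_common(1)[0][0]
    def predict(self, texts):
        return [self.majority] * len(texts)

def toy_dataset(n):
    return ([f"Satz {i}" for i in range(n)],
            ["mpos" if i % 3 == 0 else "prem" for i in range(n)])

datasets = {name: toy_dataset(30) for name in ["CD_B", "CD_C", "CD_M", "MC_K", "CQ_B"]}
cdb_texts, cdb_labels = datasets["CD_B"]

# Five models from the 5-fold cross-validation on the training dataset CD_B ...
models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=42).split(cdb_texts):
    models.append(MajorityClassModel([cdb_labels[i] for i in train_idx]))

# ... are applied to every other dataset; macro F1 is averaged per dataset.
for name, (texts, labels) in datasets.items():
    if name == "CD_B":
        continue  # CD_B serves as the reference (training) dataset
    scores = [f1_score(labels, m.predict(texts), average="macro") for m in models]
    print(f"{name}: macro F1 = {np.mean(scores):.2f} (std = {np.std(scores):.2f})")
```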
This suggests that universally valid patterns of argument structures could be learned, generalizing to a very different data type (from deliberative online platforms to questionnaire data), as well as to a process with a more general topic (from specific cycling to a comprehensive mobility concept).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B", "sec_num": "5.2.2" }, { "text": "We investigated (A) the distinction of ADUs and non-ADUs and (B) the classification of major positions and premises for German public participation processes from urban planning. [Figure 2 : Cross-dataset evaluation for Task B. Results are averaged macro F 1 values of the five models trained on CD_B. Note that the in-dataset performance of CD_B with undersampling has not been reported in Table 3 .] For this purpose, we introduced a new data corpus comprising five diverse mobility-related processes. Our first objective was to identify previously published approaches to solving the two classification tasks on public participation processes and test their performance on our datasets. Among these approaches, the SVM achieved the best results in both tasks. Our second objective was to improve the previous results. We proposed the use of BERT and demonstrated that the results of both tasks improved. On our datasets, BERT yielded highly promising macro F 1 scores, between 0.76 and 0.80 for Task A and between 0.86 and 0.93 for Task B. We additionally showed that our approach outperforms previous results for Task B on a similar German online participation dataset. We further argued that the use of pretrained models is one way to make argument mining applicable in municipalities. Our third objective was to demonstrate feasibility for urban planning processes that differ in topic or format. We showed that BERT models outperform the other approaches, achieving average macro F 1 values of 0.77 (\u03c3 = 0.02) for Task A and 0.90 (\u03c3 = 0.01) for Task B in the cross-dataset evaluation. Our results are very positive and show that practical support for municipalities in evaluating mobility-related public participation processes is within reach by providing pretrained models.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 2", "ref_id": null }, { "start": 307, "end": 314, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In future work, we plan to investigate whether our best model can generalize to non-mobility public participation processes in urban planning to cover a broader range of topics. To further improve our models, we will concentrate on improving the detection of argumentative discourse units. Although we were able to achieve promising results, it has become apparent that distinguishing ADUs from non-ADUs is a particular challenge. 
Additionally, we will extend the classification for sentences that include multiple argument components (major position and premise) and address stance detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://scholarship.law.cornell.edu/ceri/ 2 https://www.turing.ac.uk/research/researchprojects/citizen-participation-and-machine-learning-betterdemocracy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The research project Citizen Involvement in Mobility Transitions (CIMT) has identified more than 350 processes directly related to mobility since 2015.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In urban planning, propositions usually refer to specific places. Maps are often used to provide assistance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "STTS tagset(Thielen and Schiller, 2011) 6 TIGER scheme(Albert et al., 2003)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.deepset.ai/german-bert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "BERT models show a high standard deviation in the minority classes of CD_C and CQ_B. Variance is due to the small number of non-arg sentences in the cross-validation for hyperparameter tuning. Fixed hyperparameters yield comparable F1 values and much lower standard deviation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are especially grateful to Tobias Escher, who provided significant support for this work through funding acquisition, the provision of data resources and comments to the manuscript. We thank him, Katharina Huselji\u0107, Matthias Liebeck and Laura Mark for valuable discussions and feedback on earlier forms of this work. We thank the anonymous reviewers for their helpful comments. This publication is based on research in the project CIMT/Partizipationsnutzen, which is funded by the Federal Ministry of Education and Research as part of its Social-Ecological Research funding priority, funding no. 01UU1904. Responsibility for the content of this publication lies with the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Yulan He, Arkaitz Zubiaga, and Maria Liakata. 2021. Citizen participation and machine learning for a better democracy", "authors": [ { "first": "Miguel", "middle": [], "last": "Arana-Catania", "suffix": "" }, { "first": "Felix-Anselm", "middle": [], "last": "Van Lier", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Nataliya", "middle": [], "last": "Tkachenko", "suffix": "" } ], "year": null, "venue": "Digit. Gov.: Res. Pract", "volume": "2", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.1145/3452118" ] }, "num": null, "urls": [], "raw_text": "Miguel Arana-Catania, Felix-Anselm Van Lier, Rob Procter, Nataliya Tkachenko, Yulan He, Arkaitz Zu- biaga, and Maria Liakata. 2021. Citizen participa- tion and machine learning for a better democracy. Digit. Gov.: Res. 
Pract., 2(3).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyung", "middle": [ "Hyun" ], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd Inter- national Conference on Learning Representations.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "AMPERSAND: Argument Mining for PER-SuAsive oNline Discussions", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2933--2943", "other_ids": { "DOI": [ "10.18653/v1/D19-1291" ] }, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: Argument Mining for PER- SuAsive oNline Discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2933-2943.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations using RNN Encoder- Decoder for Statistical Machine Translation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dataset independent baselines for relation prediction in argument mining", "authors": [ { "first": "Oana", "middle": [], "last": "Cocarascu", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2020, "venue": "Computational Models of Argument -Proceedings of COMMA 2020", "volume": "", "issue": "", "pages": "45--52", "other_ids": { "DOI": [ "10.3233/FAIA200490" ] }, "num": null, "urls": [], "raw_text": "Oana Cocarascu, Elena Cabrio, Serena Villata, and Francesca Toni. 2020. Dataset independent base- lines for relation prediction in argument mining. In Computational Models of Argument -Proceedings of COMMA 2020, pages 45-52.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Support-Vector Networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine Learning", "volume": "20", "issue": "", "pages": "273--297", "other_ids": { "DOI": [ "10.1007/BF00994018" ] }, "num": null, "urls": [], "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- Vector Networks. Machine Learning, 20:273-297.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Argument Identification in Public Comments from eRulemaking", "authors": [ { "first": "Vlad", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Grom", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law", "volume": "", "issue": "", "pages": "199--203", "other_ids": { "DOI": [ "10.1145/3322640.3326714" ] }, "num": null, "urls": [], "raw_text": "Vlad Eidelman and Brian Grom. 2019. Argument Identification in Public Comments from eRulemak- ing. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pages 199-203.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Will citizens take no for an answer? 
What government officials can do to enhance decision acceptance", "authors": [ { "first": "", "middle": [], "last": "Peter Esaiasson", "suffix": "" } ], "year": 2010, "venue": "European Political Science Review", "volume": "2", "issue": "3", "pages": "351--371", "other_ids": { "DOI": [ "10.1017/S1755773910000238" ] }, "num": null, "urls": [], "raw_text": "Peter Esaiasson. 2010. Will citizens take no for an an- swer? What government officials can do to enhance decision acceptance. European Political Science Re- view, 2(3):351-371.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Crowdsourced Political Arguments for a New Chilean Constitution", "authors": [ { "first": "Constanza", "middle": [], "last": "Fierro", "suffix": "" }, { "first": "Claudio", "middle": [], "last": "Fuentes", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "P\u00e9rez", "suffix": "" }, { "first": "Mauricio", "middle": [], "last": "Quezada", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W17-5101" ] }, "num": null, "urls": [], "raw_text": "Constanza Fierro, Claudio Fuentes, Jorge P\u00e9rez, and Mauricio Quezada. 2017. 200K+ Crowdsourced Po- litical Arguments for a New Chilean Constitution. In Proceedings of the 4th Workshop on Argument Mining, pages 1-10.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Argumentative Link Prediction using Residual Networks and Multi-Objective Learning", "authors": [ { "first": "Andrea", "middle": [], "last": "Galassi", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lippi", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Torroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 5th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W18-5201" ] }, "num": null, "urls": [], "raw_text": "Andrea Galassi, Marco Lippi, and Paolo Torroni. 2018. Argumentative Link Prediction using Residual Net- works and Multi-Objective Learning. In Proceed- ings of the 5th Workshop on Argument Mining, pages 1-10.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Resilient Combination of Complementary CNN and RNN Features for Text Classification through Attention and Ensembling", "authors": [ { "first": "Athanasios", "middle": [], "last": "Giannakopoulos", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Coriou", "suffix": "" }, { "first": "Andreea", "middle": [], "last": "Hossmann", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Baeriswyl", "suffix": "" }, { "first": "Claudiu", "middle": [], "last": "Musat", "suffix": "" } ], "year": 2019, "venue": "6th Swiss Conference on Data Science (SDS)", "volume": "", "issue": "", "pages": "57--62", "other_ids": { "DOI": [ "10.1109/SDS.2019.000-7" ] }, "num": null, "urls": [], "raw_text": "Athanasios Giannakopoulos, Maxime Coriou, Andreea Hossmann, Michael Baeriswyl, and Claudiu Musat. 2019. Resilient Combination of Complementary CNN and RNN Features for Text Classification through Attention and Ensembling. 
In 6th Swiss Conference on Data Science (SDS), pages 57-62.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "CNN-and LSTM-based Claim Classification in Online User Comments", "authors": [ { "first": "Chinnappa", "middle": [], "last": "Guggilla", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2740--2751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinnappa Guggilla, Tristan Miller, and Iryna Gurevych. 2016. CNN-and LSTM-based Claim Classification in Online User Comments. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 2740-2751.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bag of Tricks for Efficient Text Classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of Tricks for Efficient Text Classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Corpus of Argument Networks: Using Graph Properties to Analyse Divisive Issues", "authors": [ { "first": "Barbara", "middle": [], "last": "Konat", "suffix": "" }, { "first": "John", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Katarzyna", "middle": [], "last": "Budzynska", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", "volume": "", "issue": "", "pages": "3899--3906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Konat, John Lawrence, Joonsuk Park, Katarzyna Budzynska, and Chris Reed. 2016. A Corpus of Argument Networks: Using Graph Prop- erties to Analyse Divisive Issues. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC 2016), pages 3899- 3906.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Information Acquisition using Multiple Classifications", "authors": [ { "first": "Namhee", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Conference on Knowledge Capture", "volume": "", "issue": "", "pages": "111--118", "other_ids": { "DOI": [ "10.1145/1298406.1298427" ] }, "num": null, "urls": [], "raw_text": "Namhee Kwon and Eduard Hovy. 2007. Informa- tion Acquisition using Multiple Classifications. 
In Proceedings of the 4th International Conference on Knowledge Capture, pages 111-118.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multidimensional Text Analysis for eRulemaking", "authors": [ { "first": "Namhee", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Stuart", "middle": [ "W" ], "last": "Shulman", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 International Conference on Digital Government Research", "volume": "", "issue": "", "pages": "157--166", "other_ids": { "DOI": [ "10.1145/1146598.1146649" ] }, "num": null, "urls": [], "raw_text": "Namhee Kwon, Stuart W. Shulman, and Eduard Hovy. 2006. Multidimensional Text Analysis for eRule- making. In Proceedings of the 2006 International Conference on Digital Government Research, pages 157-166.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Identifying and Classifying Subjective Claims", "authors": [ { "first": "Namhee", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Stuart", "middle": [ "W" ], "last": "Shulman", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains", "volume": "", "issue": "", "pages": "76--81", "other_ids": { "DOI": [ "https://dl.acm.org/doi/pdf/10.5555/1248460.1248473" ] }, "num": null, "urls": [], "raw_text": "Namhee Kwon, Liang Zhou, Eduard Hovy, and Stu- art W. Shulman. 2007. Identifying and Classifying Subjective Claims. In Proceedings of the 8th Annual International Conference on Digital Government Re- search: Bridging Disciplines & Domains, pages 76- 81.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using Argumentative Structure to Interpret Debates in Online Deliberative Democracy and eRulemaking", "authors": [ { "first": "John", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Katarzyna", "middle": [], "last": "Budzynska", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Konat", "suffix": "" }, { "first": "Chris", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "ACM Transactions on Internet Technology", "volume": "17", "issue": "3", "pages": "1--22", "other_ids": { "DOI": [ "10.1145/3032989" ] }, "num": null, "urls": [], "raw_text": "John Lawrence, Joonsuk Park, Katarzyna Budzynska, Claire Cardie, Barbara Konat, and Chris Reed. 2017. Using Argumentative Structure to Interpret Debates in Online Deliberative Democracy and eRulemaking. ACM Transactions on Internet Technology, 17(3):1- 22.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Argument Mining: A Survey", "authors": [ { "first": "John", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2019, "venue": "Computational Linguistics", "volume": "45", "issue": "4", "pages": "765--818", "other_ids": { "DOI": [ "10.1162/coli_a_00364" ] }, "num": null, "urls": [], "raw_text": "John Lawrence and Chris Reed. 2019. Argument Mining: A Survey. 
Computational Linguistics, 45(4):765-818.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Gradient-Based Learning Applied to Document Recognition", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Haffner", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the IEEE", "volume": "86", "issue": "11", "pages": "2278--2324", "other_ids": { "DOI": [ "10.1109/5.726791" ] }, "num": null, "urls": [], "raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278-2324.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "What to Do with an Airport? Mining Arguments in the German Online Participation Project Tempelhofer Feld", "authors": [ { "first": "Matthias", "middle": [], "last": "Liebeck", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Esau", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Conrad", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 3rd Workshop on Argument Mining", "volume": "", "issue": "", "pages": "144--153", "other_ids": { "DOI": [ "10.18653/v1/W16-2817" ] }, "num": null, "urls": [], "raw_text": "Matthias Liebeck, Katharina Esau, and Stefan Conrad. 2016. What to Do with an Airport? Mining Ar- guments in the German Online Participation Project Tempelhofer Feld. In Proceedings of the 3rd Work- shop on Argument Mining, pages 144-153.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Should Mass Comments Count? Mich", "authors": [ { "first": "Nina", "middle": [ "A" ], "last": "Mendelson", "suffix": "" } ], "year": 2012, "venue": "J. Envtl. & Admin. L", "volume": "2", "issue": "", "pages": "173--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina A. Mendelson. 2012. Should Mass Comments Count? Mich. J. Envtl. & Admin. L. 2, 2:173-183.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annotating Online Civic Discussion Threads for Argument Mining", "authors": [ { "first": "Gaku", "middle": [], "last": "Morio", "suffix": "" }, { "first": "Katsuhide", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)", "volume": "", "issue": "", "pages": "546--553", "other_ids": { "DOI": [ "10.1109/WI.2018.00-39" ] }, "num": null, "urls": [], "raw_text": "Gaku Morio and Katsuhide Fujita. 2018a. Annotating Online Civic Discussion Threads for Argument Min- ing. In 2018 IEEE/WIC/ACM International Confer- ence on Web Intelligence (WI), pages 546-553.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "End-to-End Argument Mining for Discussion Threads Based on Parallel Constrained Pointer Architecture", "authors": [ { "first": "Gaku", "middle": [], "last": "Morio", "suffix": "" }, { "first": "Katsuhide", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 5th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "11--21", "other_ids": { "DOI": [ "10.18653/v1/W18-5202" ] }, "num": null, "urls": [], "raw_text": "Gaku Morio and Katsuhide Fujita. 2018b. End-to-End Argument Mining for Discussion Threads Based on Parallel Constrained Pointer Architecture. 
In Pro- ceedings of the 5th Workshop on Argument Mining, pages 11-21.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Argument Mining with Structured SVMs and RNNs", "authors": [ { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "985--995", "other_ids": { "DOI": [ "10.18653/v1/P17-1091" ] }, "num": null, "urls": [], "raw_text": "Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument Mining with Structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985-995.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Promise and Problems of E-Democracy", "authors": [ { "first": "", "middle": [], "last": "Oecd", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "OECD. 2004. Promise and Problems of E-Democracy. OECD Publishing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Identifying Appropriate Support for Propositions in Online User Comments", "authors": [ { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "29--38", "other_ids": { "DOI": [ "10.3115/v1/W14-2105" ] }, "num": null, "urls": [], "raw_text": "Joonsuk Park and Claire Cardie. 2014. Identifying Ap- propriate Support for Propositions in Online User Comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29-38.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conditional Random Fields for Identifying Appropriate Types of Support for Propositions in Online User Comments", "authors": [ { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Arzoo", "middle": [], "last": "Katiyar", "suffix": "" }, { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2nd Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "39--44", "other_ids": { "DOI": [ "10.3115/v1/W15-0506" ] }, "num": null, "urls": [], "raw_text": "Joonsuk Park, Arzoo Katiyar, and Bishan Yang. 2015. Conditional Random Fields for Identifying Appro- priate Types of Support for Propositions in Online User Comments. In Proceedings of the 2nd Work- shop on Argumentation Mining, pages 39-44.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "From Argument Diagrams to Argumentation Mining in Texts: A Survey", "authors": [ { "first": "Andreas", "middle": [], "last": "Peldszus", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2013, "venue": "International Journal of Cognitive Informatics and Natural Intelligence (IJCINI)", "volume": "7", "issue": "1", "pages": "1--31", "other_ids": { "DOI": [ "10.4018/jcini.2013010101" ] }, "num": null, "urls": [], "raw_text": "Andreas Peldszus and Manfred Stede. 2013. From Argument Diagrams to Argumentation Mining in Texts: A Survey. 
International Journal of Cogni- tive Informatics and Natural Intelligence (IJCINI), 7(1):1-31.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Active Learning for e-Rulemaking: Public Comment Categorization", "authors": [ { "first": "Stephen", "middle": [], "last": "Purpura", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Simons", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 International Conference on Digital Government Research", "volume": "", "issue": "", "pages": "234--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Purpura, Claire Cardie, and Jesse Simons. 2008. Active Learning for e-Rulemaking: Public Comment Categorization. In Proceedings of the 2008 International Conference on Digital Govern- ment Research, pages 234-243.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Classification and Clustering of Arguments with Contextualized Word Embeddings", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Tilman", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "567--578", "other_ids": { "DOI": [ "10.18653/v1/P19-1054" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and Clustering of Arguments with Contextualized Word Embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Analyse der Anforderungen an eine Software zur (teil-)automatisierten Unterst\u00fctzung bei der Auswertung von Beteiligungsverfahren", "authors": [ { "first": "Julia", "middle": [], "last": "Romberg", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Escher", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Romberg and Tobias Escher. 2020. Anal- yse der Anforderungen an eine Software zur (teil- )automatisierten Unterst\u00fctzung bei der Auswertung von Beteiligungsverfahren. Working Paper 1, CIMT Research Group, Institute for Social Sciences, Hein- rich Heine University D\u00fcsseldorf.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Governing in Europe: Effective and Democratic", "authors": [ { "first": "Fritz", "middle": [ "W" ], "last": "Scharpf", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fritz W. Scharpf. 1999. Governing in Europe: Effec- tive and Democratic? 
Oxford: Oxford University Press.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Democracy and E-Rulemaking: Web-Based Technologies, Participation, and the Potential for Deliberation", "authors": [ { "first": "David", "middle": [], "last": "Schlosberg", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Zavestoski", "suffix": "" }, { "first": "Stuart", "middle": [ "W" ], "last": "Shulman", "suffix": "" } ], "year": 2008, "venue": "Journal of Information Technology & Politics", "volume": "4", "issue": "1", "pages": "37--55", "other_ids": { "DOI": [ "10.1300/J516v04n01_04" ] }, "num": null, "urls": [], "raw_text": "David Schlosberg, Stephen Zavestoski, and Stuart W. Shulman. 2008. Democracy and E-Rulemaking: Web-Based Technologies, Participation, and the Po- tential for Deliberation. Journal of Information Technology & Politics, 4(1):37-55.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "An experiment in digital government at the United States National Organic Program", "authors": [ { "first": "W", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "", "middle": [], "last": "Shulman", "suffix": "" } ], "year": 2003, "venue": "Agriculture and Human Values", "volume": "20", "issue": "", "pages": "253--265", "other_ids": { "DOI": [ "10.1023/A:1026104815057" ] }, "num": null, "urls": [], "raw_text": "Stuart W. Shulman. 2003. An experiment in digital government at the United States National Organic Program. Agriculture and Human Values, 20:253- 265.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Automated Analysis of e-Participation Data by Utilizing Associative Networks, Spreading Activation and Unsupervised Learning", "authors": [ { "first": "Peter", "middle": [], "last": "Teufl", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Payer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Parycek", "suffix": "" } ], "year": 2009, "venue": "International Conference on Electronic Participation", "volume": "", "issue": "", "pages": "139--150", "other_ids": { "DOI": [ "10.1007/978-3-642-03781-8_13" ] }, "num": null, "urls": [], "raw_text": "Peter Teufl, Udo Payer, and Peter Parycek. 2009. Auto- mated Analysis of e-Participation Data by Utilizing Associative Networks, Spreading Activation and Un- supervised Learning. In International Conference on Electronic Participation, pages 139-150.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Ein kleines und erweitertes Tagset f\u00fcrs Deutsche", "authors": [ { "first": "Christine", "middle": [], "last": "Thielen", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Schiller", "suffix": "" } ], "year": 2011, "venue": "Lexikon und Text: Wiederverwendbare Methoden und Ressourcen zur linguistischen Erschlie\u00dfung des Deutschen", "volume": "", "issue": "", "pages": "193--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christine Thielen and Anne Schiller. 2011. Ein kleines und erweitertes Tagset f\u00fcrs Deutsche. In Lexikon und Text: Wiederverwendbare Methoden und Ressourcen zur linguistischen Erschlie\u00dfung des Deutschen, pages 193-204. 
Max Niemeyer Verlag.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Next Steps in Near-Duplicate Detection for eRulemaking", "authors": [ { "first": "Hui", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shulman", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 International Conference on Digital Government Research", "volume": "", "issue": "", "pages": "239--248", "other_ids": { "DOI": [ "10.1145/1146598.1146663" ] }, "num": null, "urls": [], "raw_text": "Hui Yang, Jamie Callan, and Stuart Shulman. 2006. Next Steps in Near-Duplicate Detection for eRule- making. In Proceedings of the 2006 International Conference on Digital Government Research, pages 239-248.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Cross-dataset evaluation for Task A. Results are averaged macro F 1 values of the five models trained on CD_B.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "model performance (cf.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "text": "1, 153 (11.3%) 197 (11.9%) 382 (17.8%) 431 (22.2%) 172 (12.4%) mpos 2, 589 (25.4%) 556 (33.6%) 359 (16.7%) 892 (46.0%) 960 (69.5%) prem 6, 438 (63.2%) 904 (54.6%) 1, 407 (65.5%) 616 (31.8%) 250 (18.1%)", "html": null, "content": "
         CD_B             CD_C           CD_M             MC_K           CQ_B
non-arg  1,153 (11.3%)    197 (11.9%)    382 (17.8%)      431 (22.2%)    172 (12.4%)
mpos     2,589 (25.4%)    556 (33.6%)    359 (16.7%)      892 (46.0%)    960 (69.5%)
prem     6,438 (63.2%)    904 (54.6%)    1,407 (65.5%)    616 (31.8%)    250 (18.1%)
total    10,180           1,657          2,148            1,939          1,382
", "type_str": "table" }, "TABREF1": { "num": null, "text": "Distribution of sentences among the different coding categories per dataset (absolute and percentage).", "html": null, "content": "", "type_str": "table" }, "TABREF3": { "num": null, "text": "Results for Task A on the individual datasets. Scores are mean F 1 values of the five test sets, standard deviation is given in parentheses.", "html": null, "content": "
", "type_str": "table" }, "TABREF5": { "num": null, "text": "Results for Task B on the individual datasets. Scores are mean F 1 values of the five test sets, standard deviation is given in parentheses.", "html": null, "content": "
", "type_str": "table" }, "TABREF7": { "num": null, "text": "", "html": null, "content": "
[Plot residue, not recoverable as a table: bar chart titled "BERT undersampling" comparing in-dataset and cross-dataset model performance (macro F 1, y-axis 0.6-1.0; plotted values 0.88-0.91) across the datasets CD_B, CD_C, CD_M, MC_K, CQ_B.]
", "type_str": "table" } } } }