{ "paper_id": "W12-0302", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:15:43.223548Z" }, "title": "From Drafting Guideline to Error Detection: Automating Style Checking for Legislative Texts", "authors": [ { "first": "Stefan", "middle": [], "last": "H\u00f6fler", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich", "location": { "addrLine": "14", "postCode": "8050", "settlement": "Z\u00fcrich", "country": "Switzerland" } }, "email": "hoefler@cl.uzh.ch" }, { "first": "Kyoko", "middle": [], "last": "Sugisaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Zurich", "location": { "addrLine": "14", "postCode": "8050", "settlement": "Z\u00fcrich", "country": "Switzerland" } }, "email": "sugisaki@cl.uzh.ch" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper reports on the development of methods for the automated detection of violations of style guidelines for legislative texts, and their implementation in a prototypical tool. To this aim, the approach of error modelling employed in automated style checkers for technical writing is enhanced to meet the requirements of legislative editing. The paper identifies and discusses the two main sets of challenges that have to be tackled in this process: (i) the provision of domain-specific NLP methods for legislative drafts, and (ii) the concretisation of guidelines for legislative drafting so that they can be assessed by machine. The project focuses on German-language legislative drafting in Switzerland.", "pdf_parse": { "paper_id": "W12-0302", "_pdf_hash": "", "abstract": [ { "text": "This paper reports on the development of methods for the automated detection of violations of style guidelines for legislative texts, and their implementation in a prototypical tool. 
To this end, the approach of error modelling employed in automated style checkers for technical writing is enhanced to meet the requirements of legislative editing. The paper identifies and discusses the two main sets of challenges that have to be tackled in this process: (i) the provision of domain-specific NLP methods for legislative drafts, and (ii) the concretisation of guidelines for legislative drafting so that they can be assessed by machine. The project focuses on German-language legislative drafting in Switzerland.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper reports on work in progress that is aimed at providing domain-specific automated style checking to support German-language legislative editing in the Swiss federal administration. In the federal administration of the Swiss Confederation, drafts of new acts and ordinances go through several editorial cycles. In a majority of cases, they are originally written by civil servants in one of the federal offices concerned, and then reviewed and edited both by legal experts (at the Federal Office of Justice) and language experts (at the Federal Chancellery). While the former ensure that the drafts meet all relevant legal requirements, the latter are concerned with the formal and linguistic quality of the texts. 
To support this task, the authorities have drawn up style guidelines specifically geared towards Swiss legislative texts (Bundeskanzlei, 2003; Bundesamt f\u00fcr Justiz, 2007) .", "cite_spans": [ { "start": 842, "end": 863, "text": "(Bundeskanzlei, 2003;", "ref_id": "BIBREF1" }, { "start": 864, "end": 891, "text": "Bundesamt f\u00fcr Justiz, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Style guidelines for laws (and other types of legal texts) may serve three main purposes: (i) improving the understandability of the texts (Lerch, 2004; Wydick, 2005; Mindlin, 2005; Butt and Castle, 2006; Eichhoff-Cyrus and Antos, 2008) , (ii) enforcing their consistency with related texts, and (iii) facilitating their translatability into other languages. These aims are shared with writing guidelines developed for controlled languages in the domain of technical documentation (Lehrndorfer, 1996; Reuther, 2003; Muegge, 2007) .", "cite_spans": [ { "start": 139, "end": 152, "text": "(Lerch, 2004;", "ref_id": null }, { "start": 153, "end": 166, "text": "Wydick, 2005;", "ref_id": "BIBREF24" }, { "start": 167, "end": 181, "text": "Mindlin, 2005;", "ref_id": "BIBREF14" }, { "start": 182, "end": 204, "text": "Butt and Castle, 2006;", "ref_id": "BIBREF3" }, { "start": 205, "end": 236, "text": "Eichhoff-Cyrus and Antos, 2008)", "ref_id": null }, { "start": 481, "end": 500, "text": "(Lehrndorfer, 1996;", "ref_id": "BIBREF12" }, { "start": 501, "end": 515, "text": "Reuther, 2003;", "ref_id": "BIBREF19" }, { "start": 516, "end": 529, "text": "Muegge, 2007)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem is that the manual assessment of draft laws for their compliance with all relevant style guidelines is time-consuming and prone to inconsistency due to the number of authors and editors involved in the drafting process. 
The aim of the work presented in this paper is to facilitate this process by providing methods for a consistent automatic identification of some specific guideline violations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organised as follows. We first delineate the aim and scope of the project presented in the paper (section 2) and the approach we are pursuing (section 3). In the main part of the paper, we then identify and discuss the two main challenges that have to be tackled: the technical challenge of providing NLP methods for legislative drafts (section 4) and the linguistic challenge of concretising the existing drafting guidelines for legislative texts (section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of the project to be presented in this paper is to develop methods of automated style checking specifically geared towards legislative editing, and to implement these methods in a prototypical tool (cf. sections 3 and 4). We work towards automatically detecting violations of existing guidelines, and where these guidelines are very abstract, we concretise them so that they become detectable by machine (cf. section 5). However, it is explicitly not the goal of our project to propose novel style rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aim and Scope", "sec_num": "2" }, { "text": "We have adopted a broad conception of \"style checking\" that is roughly equivalent to how the term, and its variant \"controlled language checking,\" have been used in the context of technical writing (Geldbach, 2009) . 
It comprises the assessment of various aspects of text composition controlled by specific writing guidelines (typographical conventions, lexical preferences, syntax-related recommendations, constraints on discourse and document structure), but it does not include the evaluation of spelling and grammar.", "cite_spans": [ { "start": 198, "end": 214, "text": "(Geldbach, 2009)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Aim and Scope", "sec_num": "2" }, { "text": "While our project focuses on style checking for German-language Swiss federal laws (the federal constitution, acts of parliament, ordinances, federal decrees, cantonal constitutions), we believe that the challenges arising from the task are independent of the chosen language and legislative system but pertain to the domain in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aim and Scope", "sec_num": "2" }, { "text": "The most important innovative contribution of our project is the enhancement of the method of error modelling to meet the requirements of legislative editing. Error modelling means that texts are searched for specific features that indicate a style guideline violation: the forms of specific \"errors\" are thus anticipated and modelled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The method of error modelling has mainly been developed for automated style checking in the domain of technical writing. Companies often control the language used in their technical documentation in order to improve the understandability, readability and translatability of these texts. Controlled language checkers are tools that evaluate input texts for compliance with such style guidelines set up by a company. 1 State-of-the-art controlled language checkers work along the following lines. 
In a pre-processing step, they first perform an automatic analysis of the input text (tokenisation, text segmentation, morphological analysis, part-of-speech tagging, parsing) and enrich it with the respective structural and linguistic information. They then apply a number of pre-defined rules that model potential \"errors\" (i.e. violations of individual style guidelines) and aim at detecting them in the analysed text. Most checkers give their users the option to choose which rules the input text is to be checked for. Once a violation of the company's style guidelines has been detected, the respective passage is highlighted and an appropriate help text is made available to the user (e.g. as a comment in the original document or in an extra document generated by the system). The system we are working on is constructed along the same lines; its architecture is outlined in Fig. 1 .", "cite_spans": [ { "start": 416, "end": 417, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 1378, "end": 1384, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Transferring the described method to the domain of legislative editing has posed challenges to both pre-processing and error modelling. The peculiarities of legal language and legislative texts have necessitated a range of adaptations in the NLP procedures devised, and the guidelines for legislative drafting have required highly domain-specific error modelling, which needed to be backed up by substantial linguistic research. We will detail these two sets of challenges in the following two sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "4 Pre-Processing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "The legislative drafters and editors we are targeting exclusively work with MS Word documents. 
Drafters compose the texts in Word, and legislative editors use the commenting function of Word to add their suggestions and corrections to the texts they receive. We make use of the XML representation (WordML) underlying these documents. In a first step, we tokenise the text contained therein and assign each token an ID directly in the WordML structure. We then extract the text material (including the token IDs and some formatting information that proves useful in the processing steps to follow) for further processing. The token IDs are used again at the end of the style checking process when discovered style guide violations are highlighted by inserting a Word comment at the respective position in the WordML representation of the original document. The output of our style checker is thus equivalent to how legislative editors make their annotations to the drafts, a fact that proves essential for the tool's acceptance by its target users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tokenisation", "sec_num": "4.1" }, { "text": "After tokenisation, the input text is segmented into its structural units. Legislative texts exhibit a sophisticated domain-specific structure. Our text segmentation tool detects the boundaries of chapters, sections, articles, paragraphs, sentences and enumeration elements, and marks them by adding corresponding XML tags to the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "There are three reasons why text segmentation is crucial to our endeavour: 1. Proper text segmentation ensures that only relevant token spans are passed on to further processing routines (e.g. sentences contained in articles must be passed on to the parser, whereas article numbers or section headings must not).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "2. 
Most structural units are themselves the object of style rules (e.g. \"sections should not contain more than twelve articles, articles should not contain more than three paragraphs and paragraphs should not contain more than one sentence\"). The successful detection of violations of such rules depends on the correct delimitation of the respective structural units in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "3. Certain structural units constitute the context for other style rules (e.g. \"the sentence right before the first element of an enumeration has to end in a colon\"; \"the antecedent of a pronoun must be within the same article\"). Here too, correct text segmentation constitutes the prerequisite for an automated assessment of the respective style rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "We have devised a line-based pattern-matching algorithm with look-around to detect the boundaries of the structural units of legislative drafts (H\u00f6fler and Piotrowski, 2011) . The algorithm also exploits formatting information extracted together with the text from the Word documents. However, not all formatting information has proven equally reliable: as the Word documents in which the drafts are composed make use of style environments only to a very limited extent, formatting errors are relatively frequent. Font properties such as italics or bold face, or the use of list environments are frequently erroneous and thus cannot be exploited for the purpose of delimiting text segments; headers and newline information, on the other hand, have proven relatively reliable. 
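The kind of line-based chunking described above can be sketched as follows; the regular expressions, category labels and the miniature draft are illustrative assumptions for this sketch, not the patterns actually used by the tool.

```python
import re

# Illustrative line-level patterns for structural units of a draft
# (assumptions for this sketch): article headers ("Art. 14 ..."),
# paragraph markers ("2 Die ..."), and enumeration items ("a. ...").
ARTICLE = re.compile(r"^Art\.\s+\d+[a-z]?\b")
ENUM_ITEM = re.compile(r"^[a-z]\.\s+\S")
PARAGRAPH = re.compile(r"^\d+\s+\S")

def chunk_lines(lines):
    """Label each line with a coarse structural category
    (document chunking rather than full document parsing)."""
    labelled = []
    for line in lines:
        if ARTICLE.match(line):
            labelled.append(("article_header", line))
        elif ENUM_ITEM.match(line):
            labelled.append(("enum_item", line))
        elif PARAGRAPH.match(line):
            labelled.append(("paragraph", line))
        else:
            labelled.append(("text", line))
    return labelled

draft = [
    "Art. 14 Amtsenthebung",
    "2 Die Wahlbehörde kann eine Richterin oder einen Richter des Amtes entheben, wenn diese oder dieser:",
    "a. Amtspflichten schwer verletzt hat; oder",
    "b. die Fähigkeit, das Amt auszuüben, auf Dauer verloren hat.",
]
print([label for label, _ in chunk_lines(draft)])
```

Unmatched lines simply remain plain text, mirroring the robustness requirement that chunking must not fail on erroneous input.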
Figure 2 illustrates the annotation that our tool yields for the excerpt shown in the following example:", "cite_spans": [ { "start": 144, "end": 173, "text": "(H\u00f6fler and Piotrowski, 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 780, "end": 788, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "(1) Art. 14 Amtsenthebung 2 Die Wahlbeh\u00f6rde kann eine Richterin oder einen Richter vor Ablauf der Amtsdauer des Amtes entheben, wenn diese oder dieser:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "a. vors\u00e4tzlich oder grobfahrl\u00e4ssig Amtspflichten schwer verletzt hat; oder b. die F\u00e4higkeit, das Amt auszu\u00fcben, auf Dauer verloren hat.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Segmentation", "sec_num": "4.2" }, { "text": "The electoral authorities may remove a judge from office before he or she has completed his or her term where he or she:
a. wilfully or through gross negligence commits serious breaches of his or her official duties; or b. has permanently lost the ability to perform his or her official duties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Art. 14 Removal from office", "sec_num": null }, { "text": "As our methods must be robust in the face of input texts that are potentially erroneous, the text segmentation provided by our tool does not amount to a complete document parsing; our text segmentation routine rather performs document chunking by trying to detect as many structural units as possible. Another challenge that arises from the fact that the input texts may be erroneous is that features whose absence we later need to mark as an error cannot be exploited for the purpose of detecting the boundaries of the respective contextual unit. A colon, for instance, cannot be used as an indicator for the beginning of an enumeration since we must later be able to search for enumerations that are not preceded by a sentence ending in a colon as this constitutes a violation of the respective style rule. Had the colon been used as an indicator for the detection of enumeration boundaries, only enumerations preceded by a colon would have been marked as such in the first place. The development of adequate pre-processing methods constantly faces such dilemmas. It is thus necessary to always anticipate the specific guideline violations that one later wants to detect on the basis of the information added by any individual pre-processing routine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Art. 14 Removal from office", "sec_num": null }, { "text": "Special challenges also arise with regard to the task of sentence boundary detection. Legislative texts contain special syntactic structures that off-the-shelf tools cannot process and that therefore need special treatment. 
Example (1) showed a sentence that runs through an entire enumeration; colon and semicolons do not mark sentence boundaries in this case. To complicate matters even further, parenthetical sentences may be inserted after individual enumeration items, as shown in example (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Art. 14 Removal from office", "sec_num": null }, { "text": "(2) Art. In this example, a parenthetical sentence (marked in bold face) has been inserted at the end of the first enumeration item. A full stop has been put where the main sentence is interrupted, whereas the inserted sentence ends with a semicolon to indicate that the main sentence continues after it. The recognition of sentential insertions such as the one shown in (2) is important for two reasons: (i) sentential parentheses are themselves the object of style rules (in general, they are to be avoided) and should thus be marked by a style checker, and (ii) a successful parsing of the texts depends on a proper recognition of the sentence boundaries. As off-the-shelf tools cannot cope with such domain-specific structures, we have had to devise highly specialised algorithms for sentence boundary detection in our texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Art. 14 Removal from office", "sec_num": null }, { "text": "Following text segmentation, we perform a linguistic analysis of the input text which consists of three components: part-of-speech tagging, lemmatisation and chunking/parsing. The information added by these pre-processing steps is later used in the detection of violations of style rules that pertain to the use of specific terms (e.g. \"the modal sollen 'should' is to be avoided\"), syntactic constructions (e.g. \"complex participial constructions preceding a noun should be avoided\") or combinations thereof (e.g. 
\"obligations where the subject is an authority must be put as assertions and not contain a modal verb\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "For the tasks of part-of-speech tagging and lemmatisation, we employ TreeTagger (Schmid, 1994) . We have adapted TreeTagger to the peculiarities of Swiss legislative language. Domain-specific token types are pre-tagged in a special routine to avoid erroneous part-of-speech analyses. An example of a type of tokens that needs pre-tagging are domain-specific cardinal numbers: i.e. cardinal numbers augmented with letters (Article 2a) or with Latin ordinals (Paragraph 4bis) as well as ranges of such cardinal numbers (Articles 3c-6). Furthermore, TreeTagger's recognition of sentence boundaries is overwritten by the output of our text segmentation routine. We have also augmented TreeTagger's domain-general list of abbreviations with a list of domain-specific abbreviations and acronyms provided by the Swiss Federal Chancellery. The lemmatisation provided by TreeTagger usually does not recognise complex compound nouns (e.g. G\u00fcterverkehrsverlagerung 'freight traffic transfer'); such compound nouns are frequent in legislative texts (Nussbaumer, 2009) . To solve the problem, we combine the output of TreeTagger's part-of-speech tagging with the lemma information delivered by the morphology analysis tool GERTWOL (Haapalainen and Majorin, 1995) .", "cite_spans": [ { "start": 80, "end": 94, "text": "(Schmid, 1994)", "ref_id": "BIBREF20" }, { "start": 1037, "end": 1055, "text": "(Nussbaumer, 2009)", "ref_id": "BIBREF16" }, { "start": 1218, "end": 1249, "text": "(Haapalainen and Majorin, 1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "Some detection tasks (e.g. the detection of legal definitions discussed in section 4.4 below) additionally require chunking or even parsing. 
For chunking, we also employ TreeTagger; for parsing, we have begun to adapt ParZu (Sennrich et al., 2009) , a robust state-of-the-art dependency parser, to legislative language. Like most off-the-shelf parsers, ParZu was trained on a corpus of newspaper articles. As a consequence, it struggles with analysing constructions that are rare in that domain but frequent in legislative texts, such as complex coordinations of prepositional phrases and PP-attachment chains (Venturi, 2008) , parentheses (as illustrated in example 2 above) or subject clauses (as shown in example 3 below).", "cite_spans": [ { "start": 289, "end": 312, "text": "(Sennrich et al., 2009)", "ref_id": "BIBREF21" }, { "start": 605, "end": 620, "text": "(Venturi, 2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "(3) Art. 17 Rechtfertigender Notstand 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "Wer eine mit Strafe bedrohte Tat begeht, um ein eigenes oder das Rechtsgut einer anderen Person aus einer unmittelbaren, nicht anders abwendbaren Gefahr zu retten, handelt rechtm\u00e4ssig, wenn er dadurch h\u00f6herwertige Interessen wahrt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "Art. 
17 Legitimate act in a situation of necessity Whoever carries out an act that carries a criminal penalty in order to save a legal interest of his own or of another from immediate and not otherwise avertable danger, acts lawfully if by doing so he safeguards interests of higher value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "As the adaptation of ParZu to legislative texts is still in its early stages, we cannot yet provide an assessment of how useful the output of the parser, once properly modified, will be to our task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Analysis", "sec_num": "4.3" }, { "text": "The annotations that the pre-processing routines discussed so far add to the text serve as the basis for the automatic recognition of domain-specific contexts. Style rules for legislative drafting often only apply to special contexts within a law. An example is the rule pertaining to the use of the modal sollen ('should'). The drafting guidelines forbid the use of this modal except in statements of purpose. Statements of purpose thus constitute a special context inside which the detection of an instance of sollen is not to trigger an error message. Other examples of contexts in which special style rules apply are transitional provisions (\u00dcbergangsbestimmungen), repeals and amendments of current legislation (Aufhebungen und \u00c4nderungen bisherigen Rechts), definitions of the subject of a law (Gegenstandsbestimmungen), definitions of the scope of a law (Geltungsbereichsbestimmungen), definitions of terms (Begriffsbestimmungen), as well as preambles (Pr\u00e4ambeln) and commencement clauses (Ingresse). A number of these contexts can be identified automatically by assessing an article's position in the text and certain keywords contained in its header. 
A statement of purpose, for instance, is usually the first article of a law, and its header usually contains the words Zweck ('purpose') or Ziel ('aim'). Similar rules can be applied to recognise transitional provisions, repeals and amendments of current legislation, and definitions of the subject and the scope of a law.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Recognition", "sec_num": "4.4" }, { "text": "Other contexts have to be detected at the sentential level. Definitions of terms, for instance, occur not only as separate articles at the beginning of a law; they can also appear in the form of individual sentences throughout the text. As there is a whole range of style rules pertaining to legal definitions (e.g. \"a term must only be defined if it occurs at least three times in the text\"; \"a term must only be defined once within the same text\"; \"a term must not be defined by itself\"), the detection of this particular context (and its components: the term and the actual definition) is crucial to a style checker for legislative texts. 5 To identify legal definitions in the text, we have begun to adopt strategies developed in the context of legal information retrieval: Walter and Pinkal (2009) and de Maat and Winkels (2010), for instance, show that definitions in German court decisions and in Dutch laws respectively can be detected by searching for combinations of key words and sentence patterns typically used in these domain-specific contexts. We have argued that this approach is also feasible with regard to Swiss legislative texts: our pilot study has shown that a substantial number of legal definitions can be detected even without resorting to syntactic analyses, merely by searching for typical string patterns such as 'X im Sinne dieser Verordnung ist/sind Y' ('X in the sense of this ordinance is/are Y'). 
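A minimal sketch of this kind of string-pattern-based definition detection, assuming a simplified regular expression modelled on the pattern just cited (the regex and the example sentence are illustrative, not the pilot study's actual patterns):

```python
import re

# 'X im Sinne dieser Verordnung ist/sind Y' as a regular expression;
# a second variant for statutes ('dieses Gesetzes') is included as an
# illustrative assumption.
DEF_PATTERN = re.compile(
    r"(?P<term>\S.*?)\s+im Sinne (?:dieser Verordnung|dieses Gesetzes) "
    r"(?:ist|sind)\s+(?P<definition>.+)")

def find_definition(sentence):
    """Return (term, definition) if the sentence matches a
    definition pattern, else None."""
    m = DEF_PATTERN.search(sentence)
    if m is None:
        return None
    return m.group("term"), m.group("definition")

print(find_definition(
    "Anlagen im Sinne dieser Verordnung sind Bauten und Fahrzeuge."))
```

Marking the captured term and definition in the document would then allow the definition-related style rules to be checked over all matches.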
We are currently working towards refining and extending the detection of legal definitions by incorporating into the search patterns additional syntactic information yielded by chunking and parsing.", "cite_spans": [ { "start": 645, "end": 646, "text": "5", "ref_id": null }, { "start": 781, "end": 805, "text": "Walter and Pinkal (2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Context Recognition", "sec_num": "4.4" }, { "text": "Once the legal definitions occurring in a draft have been marked, the aforementioned style rules can be checked automatically (e.g. by searching the text for terms that are defined in a definition but occur fewer than three times in the remainder of the text; by checking if there are any two legal definitions that define the same term; by assessing if there are definitions where the defined term also occurs in the actual definition).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Recognition", "sec_num": "4.4" }, { "text": "After having outlined some of the main challenges that the peculiarities of legal language and legislative texts pose to the various pre-processing tasks, we now turn to the process of error modelling, i.e. the effort of transferring the guidelines for legislative drafting into concrete error detection mechanisms operating on the pre-processed texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Recognition", "sec_num": "4.4" }, { "text": "The first step towards error modelling consists in collecting the set of style rules that shall be applied to the input texts. The main sources that we use for this purpose are the compilations of drafting guidelines published by the Swiss Federal Administration (Bundeskanzlei, 2003; Bundesamt f\u00fcr Justiz, 2007) . However, especially when it comes to linguistic issues, these two documents do not claim to provide an exhaustive set of writing rules. 
To a much greater extent than the writing rules put in place in the domain of technical documentation, the rules used in legislative drafting are based on historically grown conventions, and there may well be conventions beyond what is explicitly written down in the Federal Administration's official drafting guidelines.", "cite_spans": [ { "start": 262, "end": 283, "text": "(Bundeskanzlei, 2003;", "ref_id": "BIBREF1" }, { "start": 284, "end": 311, "text": "Bundesamt f\u00fcr Justiz, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sources", "sec_num": "5.1" }, { "text": "Consequently, we have also been collecting rule material from three additional sources. A first complementary source consists of the various drafting guidelines issued by cantonal governments (Regierungsrat des Kantons Z\u00fcrich, 2005; Regierungsrat des Kantons Bern, 2000) and, to a lesser extent, the drafting guidelines of the other German-speaking countries (Bundesministerium f\u00fcr Justiz, 2008; Bundeskanzleramt, 1990; Rechtsdienst der Regierung, 1990) and the European Union (Europ\u00e4ische Kommission, 2003) . A second source consists of academic papers dealing with specific issues of legislative drafting, such as Eisenberg (2007) and Bratschi (2009) .", "cite_spans": [ { "start": 351, "end": 387, "text": "(Bundesministerium f\u00fcr Justiz, 2008;", "ref_id": null }, { "start": 388, "end": 411, "text": "Bundeskanzleramt, 1990;", "ref_id": "BIBREF2" }, { "start": 412, "end": 445, "text": "Rechtsdienst der Regierung, 1990)", "ref_id": null }, { "start": 469, "end": 499, "text": "(Europ\u00e4ische Kommission, 2003)", "ref_id": null }, { "start": 618, "end": 633, "text": "Bratschi (2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sources", "sec_num": "5.1" }, { "text": "Finally, legislative editors themselves constitute an invaluable source of expert knowledge. 
In order to learn of their unwritten codes of practice, we have established a regular exchange with the Central Language Services of the Swiss Federal Chancellery. Including the editors in the process is likely to prove essential for the acceptability of the methods that we develop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sources", "sec_num": "5.1" }, { "text": "The next error modelling step consists in concretising and formalising the collected rules so that specific algorithms can be developed to search for violations of the rules in the pre-processed texts. Depending on the level of abstraction of a rule, this task is either relatively straightforward or requires more extensive preliminary research:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "Concrete Rules A number of rules for legislative drafting define concrete constraints and can thus be directly translated into detection rules. Examples of such concrete rules are rules that prohibit the use of specific abbreviations (e.g. bzw. 'respectively'; z.B. 'e.g.'; d.h. 'i.e.') and of certain terms and phrases (e.g. grunds\u00e4tzlich 'in principle'; in der Regel 'as a general rule'). In such cases, error detection simply consists in searching for the respective items in the input text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "Some rules first need to be spelled out but can then also be formalised more or less directly: the rule stating that units of measurement must always be written out rather than abbreviated, for instance, requires that a list of such abbreviations of measuring units (e.g. 
m for meter, kg for kilogram, % for percent) be compiled, whose entries can then be searched for in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "The formalisation of some other rules is somewhat more complicated but can still be derived more or less directly. The error detection strategies for these rules include accessing tags that were added during pre-processing or evaluating the environment of a potential error. For example, the rule stating that sentences introducing an enumeration must end in a colon can be checked by searching the text for tags that are not preceded by a colon; violations of the rule stating that an article must not contain more than three paragraphs can be detected by counting, for each environment, the number of elements it contains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "Abstract Rules However, guidelines for legislative drafting frequently contain rules that define relatively abstract constraints. In order to be able to detect violations of such constraints, a linguistic concretisation of the rules is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "An example is the oft-cited rule that a sentence should only convey one statement or proposition (Bundesamt f\u00fcr Justiz, 2007, p. 358). The error modelling for this rule is not straightforward: it is neither clear what counts as a statement in the context of a legislative text, nor is it obvious what forms sentences violating this rule exhibit. Linguistic indicators for the presence of a multipropositional sentence first need to be determined in in-depth analyses of legislative language.
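The plain searches just described for concrete rules can be sketched in a few lines. The following is a minimal illustrative sketch, not the project's implementation; the function name and the rule lists are our own illustrative choices, drawn from the examples given in this section:

```python
import re

# Illustrative rule lists taken from the examples above; a production
# checker would load them from the formalised detection rules.
PROHIBITED_EXPRESSIONS = ["bzw.", "z.B.", "d.h.", "grundsätzlich", "in der Regel"]
UNIT_ABBREVIATIONS = ["m", "kg", "%"]  # units must be written out in full

def find_concrete_violations(text):
    """Return (offset, item, rule) triples for concrete-rule violations."""
    hits = []
    for item in PROHIBITED_EXPRESSIONS:
        for m in re.finditer(re.escape(item), text):
            hits.append((m.start(), item, "prohibited expression"))
    # A unit abbreviation is only flagged when it follows a number, so
    # that e.g. the letter 'm' inside ordinary words is not matched.
    for unit in UNIT_ABBREVIATIONS:
        pattern = r"\d+\s*" + re.escape(unit) + r"(?![\wäöüß])"
        for m in re.finditer(pattern, text):
            hits.append((m.start(), unit, "unit must be written out"))
    return sorted(hits)
```

Checks of this kind only cover fully concrete rules; the more abstract rules discussed in this section additionally require the annotations produced during pre-processing.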
In H\u00f6fler (2011), we name a number of such indicators: among other things, sentence coordination, relative clauses introduced by the adverb wobei ('whereby'), and certain prepositions (e.g. vorbeh\u00e4ltlich 'subject to' or mit Ausnahme von 'with the exception of') can be signs that a sentence contains more than one statement.", "cite_spans": [ { "start": 97, "end": 133, "text": "(Bundesamt f\u00fcr Justiz, 2007, p. 358)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "Even drafting rules that look fairly specific at first glance may turn out to be in need of further linguistic concretisation. An example is the rule that states that in an enumeration, words that are shared between all enumeration elements should be bracketed out into the introductory sentence of the enumeration. If, for instance, each element of an enumeration starts with the preposition f\u00fcr ('for'), then that preposition belongs in the introductory sentence. The rule seems straightforward enough, but in reality, the situation is somewhat more complicated. Example (4) shows a case where a word that occurs at the beginning of all elements of an enumeration (the definite article die 'the') cannot be bracketed out into the introductory sentence: Even if one ignores the fact that the definite article in letters a and b is in fact not the same as the one in letter c (the former being plural, the latter singular), it is quite apparent that articles cannot be extracted from the elements of an enumeration without the nouns they specify. Even the seemingly simple rule in question is thus in need of a more linguistically informed concretisation before it can be effectively checked by machine. The examples illustrate that style guidelines for legislative writing are often kept at a level of abstraction that necessitates concretisations if one is to detect violations of the respective rules automatically.
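The lexical indicators of multipropositional sentences named above can be operationalised as a first, deliberately over-permissive filter. The sketch below is illustrative rather than the project's implementation; detecting the remaining indicator, sentence coordination, would additionally require the parse trees produced during pre-processing:

```python
# Surface indicators of multipropositional sentences, as named above;
# all markers are stored in lowercase for case-insensitive matching.
MULTIPROP_INDICATORS = {
    "wobei": "relative clause introduced by 'wobei'",
    "vorbehältlich": "preposition 'vorbehältlich' ('subject to')",
    "mit ausnahme von": "preposition 'mit Ausnahme von' ('with the exception of')",
}

def multiprop_candidates(sentence):
    """Return descriptions of indicators suggesting more than one statement."""
    lowered = sentence.lower()
    return [description for marker, description in MULTIPROP_INDICATORS.items()
            if marker in lowered]
```

A hit marks a candidate for review, not a definitive error, in line with the soft-constraint character of the rule.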
Besides the development of domain-specific pre-processing algorithms, the extensive and highly specialised linguistic research required for such concretisations constitutes the main task being tackled in this project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "Conflicting Rules A further challenge to error modelling arises from the fact that a large proportion of drafting guidelines for legislative texts do not constitute absolute constraints but rather have the status of general writing principles and rules of thumb. This fact has to be reflected in the feedback messages that the system gives to its users: what the tool detects are often not "errors" in the proper sense of the word but merely passages that the author or editor may want to reconsider.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "The fact that many style rules only define soft constraints also means that there may be conflicting rules. Consider, for instance, sentence (5): As far as the offender fails to pay the monetary penalty despite being granted an extended deadline for payment or a reduced daily penalty unit or fails to perform the community service despite being warned of the consequences, the alternative custodial sentence is executed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "On the one hand, this sentence must be considered a violation of the style rule that states that the main verb of a sentence (here execute) should be introduced as early as possible (Regierungsrat des Kantons Z\u00fcrich, 2005, p. 73).
On the other hand, if the sentence were rearranged in compliance with this rule - by switching the order of the main clause and the subordinate clause - it would violate the rule stating that information is to be presented in temporal and causal order (Bundesamt f\u00fcr Justiz, 2007, p. 354). This latter rule entails that the condition precedes its consequence. To be able to deal with such conflicting constraints, error detection strategies have to be assigned weights. However, one and the same rule may have different weights under different circumstances. In conditional sentences like the one shown above, the causality principle obviously weighs more than the rule that the main verb must be introduced early in the sentence. Such context-dependent rankings for individual style rules have to be inferred and corroborated by tailor-made corpus-linguistic studies.", "cite_spans": [ { "start": 182, "end": 228, "text": "(Regierungsrat des Kantons Z\u00fcrich, 2005, p. 73", "ref_id": null }, { "start": 496, "end": 517, "text": "Justiz, 2007, p. 354)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Concretisation and Formalisation", "sec_num": "5.2" }, { "text": "The number of drafts available to us is very limited - too limited to be used to test and refine the error models we develop. However, due to the complexity of the drafting process (multiple authors and editors, political intervention), laws that have already come into force still exhibit violations of specific style rules. We therefore resort to such already published laws to test and refine the error models we develop. To this aim, we have built a large corpus of legislative texts automatically annotated by the pre-processing routines we have described earlier in the paper (H\u00f6fler and Piotrowski, 2011). The corpus contains the entire current federal legislation of Switzerland, i.e.
the federal constitution, all cantonal constitutions, all federal acts and ordinances, federal decrees and treaties between the Confederation and individual cantons and municipalities. It allows us to try out and evaluate novel error detection strategies by assessing the number and types of true and false positives returned.", "cite_spans": [ { "start": 625, "end": 654, "text": "(H\u00f6fler and Piotrowski, 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Testing and Evaluation", "sec_num": "5.3" }, { "text": "In this paper, we have discussed the development of methods for the automated detection of violations of domain-specific style guidelines for legislative texts, and their implementation in a prototypical tool. We have illustrated how the approach of error modelling employed in automated style checkers for technical writing can be enhanced to meet the requirements of legislative editing. Two main sets of challenges are tackled in this process. First, domain-specific NLP methods for legislative drafts have to be provided. Without extensive adaptations, off-the-shelf NLP tools that have been trained on corpora of newspaper articles are not adequately equipped to deal with the peculiarities of legal language and legislative texts. Second, the error modelling for a large number of drafting guidelines requires a concretisation step before automated error detection strategies can be put in place. 
The substantial linguistic research that such concretisations require constitutes a core task to be carried out in the development of a style checker for legislative texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Examples of well-developed commercial tools that offer such style checking for technical texts are acrolinx IQ by Acrolinx and CLAT by IAI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Patentgerichtsgesetz (Patent Court Act), SR 173.41; for the convenience of readers, examples are also rendered in the (non-authoritative) English version published at http://www.admin.ch/ch/e/rs/rs.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Strahlenschutzverordnung (Radiological Protection Ordinance), SR 814.50; emphasis added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Strafgesetzbuch (Criminal Code), SR 311.0; emphasis added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Further rules for the use of legal definitions in Swiss law texts are provided byBratschi (2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Bundesverfassung (Federal Constitution), SR 101; emphasis added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The project is funded under SNSF grant 134701. 
The authors wish to thank the Central Language Services of the Swiss Federal Chancellery for their continued advice and support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Gesetzgebungsleitfaden: Leitfaden f\u00fcr die Ausarbeitung von Erlassen des Bundes", "authors": [ { "first": "Rebekka", "middle": [], "last": "Bratschi", "suffix": "" } ], "year": 2007, "venue": "LeGes", "volume": "20", "issue": "2", "pages": "191--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebekka Bratschi. 2009. \"Frau im Sinne dieser Badeordnung ist auch der Bademeister.\" Legaldefinitionen aus redaktioneller Sicht. LeGes, 20(2):191-213. Bundesamt f\u00fcr Justiz, editor. 2007. Gesetzgebungsleitfaden: Leitfaden f\u00fcr die Ausarbeitung von Erlassen des Bundes. Bern, 3. edition.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Gesetzestechnische Richtlinien", "authors": [ { "first": "", "middle": [], "last": "Bundeskanzlei", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bundeskanzlei, editor. 2003. Gesetzestechnische Richtlinien. Bern.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Handbuch der Rechtsf\u00f6rmlichkeit, Empfehlungen zur Gestaltung von Gesetzen und Rechtsverordnungen", "authors": [ { "first": "", "middle": [], "last": "Bundeskanzleramt", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bundeskanzleramt, editor. 1990. Handbuch der Rechtsetzungstechnik, Teil 1: Legistische Leitlinien. Wien. Bundesministerium f\u00fcr Justiz, editor. 2008. Handbuch der Rechtsf\u00f6rmlichkeit, Empfehlungen zur Gestaltung von Gesetzen und Rechtsverordnungen.
Bundesanzeiger Verlag, K\u00f6ln.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modern Legal Drafting", "authors": [ { "first": "Peter", "middle": [], "last": "Butt", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Castle", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Butt and Richard Castle. 2006. Modern Legal Drafting. Cambridge University Press, Cambridge, UK, 2nd edition.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated classification of norms in sources of law", "authors": [ { "first": "Emile", "middle": [], "last": "De", "suffix": "" }, { "first": "Maat", "middle": [], "last": "", "suffix": "" }, { "first": "Radboud", "middle": [], "last": "Winkels", "suffix": "" } ], "year": 2010, "venue": "Semantic Processing of Legal Texts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emile de Maat and Radboud Winkels. 2010. Automated classification of norms in sources of law. In Semantic Processing of Legal Texts. Springer, Berlin.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Verst\u00e4ndlichkeit als B\u00fcrgerrecht? Die Rechts- und Verwaltungssprache in der \u00f6ffentlichen Diskussion", "authors": [], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin M. Eichhoff-Cyrus and Gerd Antos, editors. 2008. Verst\u00e4ndlichkeit als B\u00fcrgerrecht? Die Rechts- und Verwaltungssprache in der \u00f6ffentlichen Diskussion.
Duden, Mannheim, Germany.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Gemeinsamer Leitfaden des Europ\u00e4ischen Parlaments, des Rates und der Kommission f\u00fcr Personen, die in den Gemeinschaftsorganen an der Abfassung von Rechtstexten mitwirken", "authors": [ { "first": "Peter", "middle": [], "last": "Eisenberg", "suffix": "" } ], "year": 2003, "venue": "Denken wie ein Philosoph und schreiben wie ein Bauer", "volume": "", "issue": "", "pages": "105--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Eisenberg. 2007. Die Grammatik der Gesetzessprache: Was ist eine Verbesserung? In Andreas L\u00f6tscher and Markus Nussbaumer, editors, Denken wie ein Philosoph und schreiben wie ein Bauer, pages 105-122. Schulthess, Z\u00fcrich. Europ\u00e4ische Kommission, editor. 2003. Gemeinsamer Leitfaden des Europ\u00e4ischen Parlaments, des Rates und der Kommission f\u00fcr Personen, die in den Gemeinschaftsorganen an der Abfassung von Rechtstexten mitwirken. Amt f\u00fcr Ver\u00f6ffentlichungen der Europ\u00e4ischen Gemeinschaften, Luxemburg.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "GER-TWOL und morphologische Desambiguierung f\u00fcr das Deutsche", "authors": [ { "first": "Mariikka", "middle": [], "last": "Haapalainen", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Majorin", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 10th Nordic Conference of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariikka Haapalainen and Ari Majorin. 1995. GER-TWOL und morphologische Desambiguierung f\u00fcr das Deutsche. In Proceedings of the 10th Nordic Conference of Computational Linguistics.
University of Helsinki, Department of General Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Building corpora for the philological study of Swiss legal texts", "authors": [ { "first": "Stefan", "middle": [], "last": "H\u00f6fler", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Piotrowski", "suffix": "" } ], "year": 2011, "venue": "Journal for Language Technology and Computational Linguistics (JLCL)", "volume": "26", "issue": "2", "pages": "77--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan H\u00f6fler and Michael Piotrowski. 2011. Building corpora for the philological study of Swiss legal texts. Journal for Language Technology and Computational Linguistics (JLCL), 26(2):77-90.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Detecting legal definitions for automated style checking in draft laws", "authors": [ { "first": "Stefan", "middle": [], "last": "H\u00f6fler", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "B\u00fcnzli", "suffix": "" }, { "first": "Kyoko", "middle": [], "last": "Sugisaki", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan H\u00f6fler, Alexandra B\u00fcnzli, and Kyoko Sugisaki. 2011. Detecting legal definitions for automated style checking in draft laws. Technical Report CL-2011.01, University of Zurich, Institute of Computational Linguistics, Z\u00fcrich.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multipropositionale Rechtss\u00e4tze an der Sprache erkennen", "authors": [ { "first": "Stefan", "middle": [], "last": "H\u00f6fler", "suffix": "" } ], "year": 2011, "venue": "LeGes", "volume": "22", "issue": "2", "pages": "259--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan H\u00f6fler. 2011. \"Ein Satz - eine Aussage.\" Multipropositionale Rechtss\u00e4tze an der Sprache erkennen.
LeGes, 22(2):259-279.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Kontrolliertes Deutsch: Linguistische und sprachpsychologische Leitlinien f\u00fcr eine (maschinell) kontrollierte Sprache in der Technischen Dokumentation", "authors": [ { "first": "Anne", "middle": [], "last": "Lehrndorfer", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Lehrndorfer. 1996. Kontrolliertes Deutsch: Linguistische und sprachpsychologische Leitlinien f\u00fcr eine (maschinell) kontrollierte Sprache in der Technischen Dokumentation. G\u00fcnter Narr, T\u00fcbingen.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Recht verstehen. Verst\u00e4ndlichkeit, Missverst\u00e4ndlichkeit und Unverst\u00e4ndlichkeit von Recht. de Gruyter", "authors": [], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kent D. Lerch, editor. 2004. Recht verstehen. Verst\u00e4ndlichkeit, Missverst\u00e4ndlichkeit und Unverst\u00e4ndlichkeit von Recht. de Gruyter, Berlin.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Is plain language better? A comparative readability study of plain language court forms", "authors": [ { "first": "Maria", "middle": [], "last": "Mindlin", "suffix": "" } ], "year": 2005, "venue": "Scribes Journal of Legal Writing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Mindlin. 2005. Is plain language better? A comparative readability study of plain language court forms.
Scribes Journal of Legal Writing, 10.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Controlled language: The next big thing in translation?", "authors": [ { "first": "Uwe", "middle": [], "last": "Muegge", "suffix": "" } ], "year": 2007, "venue": "", "volume": "7", "issue": "", "pages": "21--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uwe Muegge. 2007. Controlled language: The next big thing in translation? ClientSide News Magazine, 7(7):21-24.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Rhetorisch-stilistische Eigenschaften der Sprache des Rechtswesens", "authors": [ { "first": "Markus", "middle": [], "last": "Nussbaumer", "suffix": "" } ], "year": 2009, "venue": "Rhetorik und Stilistik/Rhetoric and Stylistics, Handbooks of Linguistics and Communication Science", "volume": "", "issue": "", "pages": "2132--2150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Nussbaumer. 2009. Rhetorisch-stilistische Eigenschaften der Sprache des Rechtswesens. In Ulla Fix, Andreas Gardt, and Joachim Knape, editors, Rhetorik und Stilistik/Rhetoric and Stylistics, Handbooks of Linguistics and Communication Science, pages 2132-2150. de Gruyter, New York/Berlin.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Richtlinien der Regierung des F\u00fcrstentums Liechtenstein \u00fcber die Grunds\u00e4tze der Rechtsetzung (Legistische Richtlinien)", "authors": [], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rechtsdienst der Regierung, editor. 1990. Richtlinien der Regierung des F\u00fcrstentums Liechtenstein \u00fcber die Grunds\u00e4tze der Rechtsetzung (Legistische Richtlinien).
Vaduz.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Rechtsetzungsrichtlinien des Kantons Bern", "authors": [], "year": 2000, "venue": "Richtlinien der Rechtsetzung", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regierungsrat des Kantons Bern, editor. 2000. Rechtsetzungsrichtlinien des Kantons Bern. Bern. Regierungsrat des Kantons Z\u00fcrich, editor. 2005. Richtlinien der Rechtsetzung. Z\u00fcrich.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Two in one - can it work? Readability and translatability by means of controlled language", "authors": [ { "first": "Ursula", "middle": [], "last": "Reuther", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EAMT-CLAW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ursula Reuther. 2003. Two in one - can it work? Readability and translatability by means of controlled language. In Proceedings of EAMT-CLAW 2003.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic part-of-speech tagging using decision trees", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the International Conference on New Methods in Language Processing", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees.
In Proceedings of the International Conference on New Methods in Language Processing, pages 44-49.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A new hybrid dependency parser for German", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Gerold", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Warin", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the GSCL Conference", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Gerold Schneider, Martin Volk, and Martin Warin. 2009. A new hybrid dependency parser for German. In Proceedings of the GSCL Conference 2009, pages 115-124, T\u00fcbingen.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Parsing legal texts: A contrastive study with a view to knowledge management applications", "authors": [ { "first": "Giulia", "middle": [], "last": "Venturi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the LREC 2008 Workshop on Semantic Processing of Legal Texts", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giulia Venturi. 2008. Parsing legal texts: A contrastive study with a view to knowledge management applications. In Proceedings of the LREC 2008 Workshop on Semantic Processing of Legal Texts, pages 1-10, Marrakech.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Definitions in court decisions: Automatic extraction and ontology acquisition", "authors": [ { "first": "Stephan", "middle": [], "last": "Walter", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2009, "venue": "Law, Ontologies and the Semantic Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Walter and Manfred Pinkal. 2009.
Definitions in court decisions: Automatic extraction and ontology acquisition. In Joost Breuker, Pompeu Casanovas, Michel Klein, and Enrico Francesconi, editors, Law, Ontologies and the Semantic Web. IOS Press, Amsterdam.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Plain English for Lawyers", "authors": [ { "first": "Richard", "middle": [ "C" ], "last": "Wydick", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard C. Wydick. 2005. Plain English for Lawyers. Carolina Academic Press, 5th edition.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Architecture of the style checking system.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Illustration of the text segmentation provided by the tool. Excerpt: Article 14 of the Patent Court Act. (Token delimiters and any other tags not related to text segmentation have been omitted in the example.)", "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "XML <...> <...><...> <...><...> <...><...>", "type_str": "table", "num": null, "content": "
[Diagram residue: architecture of the style checking system. A legislative draft is pre-processed, checked against the formalised detection rules by the error detection component, and handed to output generation, which combines the detected error IDs and token spans with predefined help texts to produce a highlighted draft, an enriched draft with an error report, and links to the help text documentation.]
" } } } }