{ "paper_id": "W12-0305", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:07:03.846450Z" }, "title": "LELIE: A Tool Dedicated to Procedure and Requirement Authoring", "authors": [ { "first": "Flore", "middle": [], "last": "Barcellini", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNAM", "location": { "addrLine": "41 Rue Gay Lussac", "settlement": "Paris", "country": "France" } }, "email": "flore.barcellini@cnam.fr" }, { "first": "Corinne", "middle": [], "last": "Grosse", "suffix": "", "affiliation": { "laboratory": "", "institution": "CNAM", "location": { "addrLine": "41 Rue Gay Lussac", "settlement": "Paris", "country": "France" } }, "email": "" }, { "first": "Camille", "middle": [], "last": "Albert", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Patrick", "middle": [], "last": "Saint-Dizier", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This short paper relates the main features of LELIE, phase 1, which detects errors made by technical writers when producing procedures or requirements. This results from ergonomic observations of technical writers in various companies.", "pdf_parse": { "paper_id": "W12-0305", "_pdf_hash": "", "abstract": [ { "text": "This short paper relates the main features of LELIE, phase 1, which detects errors made by technical writers when producing procedures or requirements. This results from ergonomic observations of technical writers in various companies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The main goal of the LELIE project is to produce an analysis and a piece of software based on language processing and artificial intelligence that detects and analyses potential risks of different kinds (first health and ecological, but also social and economical) in technical documents. We concentrate on procedural documents and on requirements (Hull et al. 2011) which are, by large, the main types of technical documents used in companies.", "cite_spans": [ { "start": 348, "end": 366, "text": "(Hull et al. 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "Given a set of procedures (e.g., production launch, maintenance) over a certain domain produced by a company, and possibly given some domain knowledge (ontology, terminology, lexical), the goal is to process these procedures and to annotate them wherever potential risks are identified. Procedure authors are then invited to revise these documents. Similarly, requirements, in particular those related to safety, often exhibit complex structures (e.g., public regulations, to cite the worse case): several embedded conditions, negation, pronouns, etc., which make their use difficult, especially in emergency situations. Indeed, procedures as well as safety requirements are dedicated to action: little space should be left to personal interpretations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "Risk analysis and prevention in LELIE is based on three levels of analysis, each of them potentially leading to errors made by operators in action:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "1. 
Detection of inappropriate ways of writing: complex expressions, implicit elements, complex references, scoping difficulties (connectors, conditionals), inappropriate granularity level, and inappropriate domain style, involving the lexical, semantic, and pragmatic levels, 2. Detection of domain incoherencies in procedures: detection of unusual ways of realizing an action (e.g., unusual instrument, equipment, or product, or an unusual value such as temperature or length of treatment) with respect to similar actions in other procedures or to data extracted from technical documents, 3. Confrontation of domain safety requirements with procedures to check whether the required safety constraints are met.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "Most industrial areas have now defined authoring recommendations on how to elaborate, structure, and write procedures of various kinds. However, our experience with technical writers shows that those recommendations are not very strictly followed in most situations. Our objective is to develop a tool that checks ill-formed structures with respect to these recommendations and general style considerations in procedures and requirements as they are written.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "In addition, authoring guidelines do not specify all the aspects of document authoring: our investigations of authoring practices have identified a number of recurrent linguistic or conceptual errors that are usually not specified in authoring guidelines. These errors are identified either from the comprehension difficulties encountered by technicians in operation using these documents to carry out a task, or from technical writers themselves, who are aware of the errors they should avoid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objectives", "sec_num": "1" }, { "text": "Risk management and prevention is now a major issue. It is developed at several levels, in particular via probabilistic analysis of risks in complex situations (e.g., oil storage in natural caves). Detecting potential risks by analyzing business errors in written documents is a relatively new approach. It requires taking into account most levels of language: lexical, grammatical, stylistic, and discursive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "Authoring tools for simplified language are not a new concept; one of the first checkers was developed at Boeing 1 , initially for their own simplified English and later adapted for the ASD Simplified Technical English Specification 2 . A more recent language checking system is Acrolinx IQ by Acrolinx 3 . Some technical writing environments also include language checking functionality, e.g., MadPak 4 . Ament (2002) and Weiss (2000) developed a number of useful methodological elements for authoring technical documents and for error identification and correction.", "cite_spans": [ { "start": 406, "end": 418, "text": "Ament (2002)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "The originality of our approach is as follows. 
Authoring recommendations are made flexible and context-dependent. For example, although negation is in general not allowed in instructions, there are cases where it cannot be avoided because the positive counterpart cannot easily be formulated, e.g., do not dispose of the acid in the sewer. Similarly, references may be allowed if the referent is close and non-ambiguous. However, checking this requires some knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "Following the cognitive ergonomics observations carried out in the project, a specific effort is made concerning the well-formedness (following grammatical and cognitive standards) of discourse structures and their regularity over entire documents (e.g., instructions or enumerations all written in the same way).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "The production of procedures includes some controls on contents, in particular on action verb arguments, as indicated in the second objective above, via the Arias domain knowledge base, e.g., avoiding typos or confusions among syntactically and semantically well-identified entities such as instruments, products, equipment, values, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "There exists no real requirement analysis system based on language that can check the quality and the consistency of large sets of authoring recommendations. The main products are IBM Doors and Doors Trek 5 , Objecteering 6 , and Reqtify 7 , which are essentially textual databases with advanced visual and design interfaces, query facilities for retrieving specific requirements, and some traceability functions carried out via predefined attributes. These three products also include a formal language (essentially based on attribute-value pairs) that is used to check some simple forms of coherence among large sets of requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "The authoring tool includes facilities for French-speaking authors who need to write in English, addressing typical errors they make via 'language transfer' (Garnier, 2011). We will not address this point here.", "cite_spans": [ { "start": 156, "end": 171, "text": "(Garnier, 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "This project, LELIE, is based on the TextCoop system (Saint-Dizier, 2012), a system dedicated to language analysis, in particular discourse analysis (including the treatment of long-distance dependencies). The project also includes the Arias action knowledge base, which stores prototypical actions in context and can update them. It also includes an ASP (Answer Set Programming) solver 8 to check for various forms of incoherence and incompleteness. The kernel of the system is written in SWI Prolog, with interfaces in Java. The project is currently realized for French; an English version is under development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "The system is based on the following principles. 
First, the system is parameterized: the technical writer may choose the error types they want checked, and the severity level for each error type when there are several such levels (e.g., fuzzy terms show several degrees of fuzziness and therefore have several associated severity levels). Second, the system simply tags elements identified as errors; the correction is left to the author. However, some help or guidelines are offered. For example, guidelines for reformulating a negative sentence into a positive one are proposed. Third, the way errors are displayed can be customized to the writer's habits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "We present below a kernel system that deals with the most frequent and common errors made by technical writers independently of the technical domain. This kernel needs in-depth customization to the domain at hand. For example, the verbs used or the terminological preferences must be implemented for each industrial context. Our system offers the control operations, but these need to be associated with domain data. Finally, to abstract away from the variability of document formats, the system input is an abstract document with a minimal number of XML tags as required by the error detection rules. Managing and transforming the original text formats into this abstract format is not dealt with here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Situation and our contribution", "sec_num": "2" }, { "text": "In spite of several levels of human proofreading and validation, it turns out that texts still contain a large number of situations where recommendations are not followed. Reasons are analyzed in, e.g., (B\u00e9guin, 2003) and (Mollo et al., 2004, 2008). Via an ergonomic analysis of the activity of technical writers, we have identified several layers of recurrent error types, which are not in general treated by standard text editors such as Word or Visio, the favorite editors for procedures.", "cite_spans": [ { "start": 207, "end": 221, "text": "(B\u00e9guin, 2003)", "ref_id": "BIBREF2" }, { "start": 224, "end": 243, "text": "(Mollo et al., 2004", "ref_id": "BIBREF10" }, { "start": 244, "end": 265, "text": "(Mollo et al., , 2008", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "Here is a list of categories of errors we have identified. 
Some errors are relevant for a whole document, whereas others must only be detected in precise constructions (e.g., in instructions, which are the most constrained constructions):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 General layout of the document: size of sentences, paragraphs, and of the various forms of enumerations, homogeneity of typography, structure of titles, presence of expected structures such as a summary, but also the global organization of the text following style recommendations (expressed in TextCoop via a grammar), etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Morphology: in general, passive constructions and future tenses must be avoided in instructions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Lexical aspects: fuzzy terms, inappropriate terms such as deverbals, light verb constructions or modals in instructions, and detection of terms which cannot be associated, in particular via conjunctions. This requires assigning types to lexical data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Grammatical complexity: the system checks for various forms of negation, referential forms, sequences of conditional expressions, long sequences of coordination, complex noun complements, and relative clause embeddings. All these constructions often make documents difficult to understand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Uniformity of style over a set of instructions, over titles and various lists of equipment, and uniformity of expression of safety warnings and advice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Correct position in the document of specific fields: safety precautions, prerequisites, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Structure completeness, in particular completeness of case enumerations with respect to known data and completeness of equipment enumerations, via the Arias action base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Regular form of requirements: context of application properly written (e.g., via conditions) followed by a set of instructions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "\u2022 Incorrect domain value, as detected by Arias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "When a text is analyzed, the system annotates the original document (which is in our current 
implementation a plain text, Word, or XML document): revisions are only made by technical writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "Besides tags, which must be as explicit as possible, colors indicate the severity level for the error considered (the same error, e.g., use of a fuzzy term, can have several severity levels). The most severe errors must be corrected first. At the moment, we propose four levels of severity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "ERROR: must be corrected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "AVOID: preferably avoid this usage; think about an alternative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "CHECK: this is not really bad, but it is recommended to make sure this is clear; this level is also used to make sure that argument values are correct, when a non-standard one is found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "ADVICE: possibly not the best language realization, but this is probably a minor problem. It is not clear whether there are alternatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "The model, the implementation and the results are presented in detail in (Barcellini et al., 2012).", "cite_spans": [ { "start": 73, "end": 98, "text": "(Barcellini et al., 2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Categorizing language and conceptual errors found in technical documents", "sec_num": "3" }, { "text": "We have developed the first phase of the LELIE project: detecting authoring errors in technical documents that may lead to risks. We have identified a number of error types: lexical, business-related, grammatical, and stylistic. These errors have been identified from ergonomics investigations. The system is now fully implemented on the TextCoop platform and has been evaluated on a number of documents. It is now of much interest to evaluate users' reactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "We have implemented the system kernel. The main challenge ahead of us is the customization to a given industrial context. 
This includes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "\u2022 Accurately testing the system on the company's documents so as to filter out a few remaining odd error detections,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "\u2022 Introducing the domain knowledge via the domain ontology and terminology, and enhancing the rules we have developed to take every aspect into account,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "\u2022 Analyzing and incorporating into the system the authoring guidelines proper to the company that may have an impact on understanding and therefore on the emergence of risks,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "\u2022 Implementing the interfaces between the original user documents and our system, with the abstract intermediate representation we have defined,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "\u2022 Customizing the tags expressing errors to the users' profiles and expectations, and enhancing correction schemas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "When sufficiently operational, the kernel of the system will be made available online, and the code will probably be made available in open-source form or via a free or low-cost license.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Perspectives", "sec_num": "4" }, { "text": "http://www.boeing.com/phantom/sechecker/ 2 ASD-STE100, http://www.asd-ste100.org/ 3 http://www.acrolinx.com/ 4 http://www.madcapsoftware.com/products/madpak/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ibm.com/software/awdtools/doors/ 6 http://www.objecteering.com/ 7 http://www.geensoft.com/ 8 For an overview of ASP see Brewka et al. (2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project is funded by the French National Research Agency ANR. We also thank the reviewers and the companies that showed a strong interest in our project, gave us access to their technical documents, and allowed us to observe their technical writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Single Sourcing. Building modular documentation", "authors": [ { "first": "Kurt", "middle": [], "last": "Ament", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Ament. 2002. Single Sourcing. Building modular documentation, W. Andrew Pub.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Risk Analysis and Prevention: LELIE, a Tool dedicated to Procedure and Requirement Authoring, LREC 2012", "authors": [ { "first": "Flore", "middle": [], "last": "Barcellini", "suffix": "" }, { "first": "Camille", "middle": [], "last": "Albert", "suffix": "" }, { "first": "Corinne", "middle": [], "last": "Grosse", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flore Barcellini, Camille Albert, Corinne Grosse, Patrick Saint-Dizier. 
2012. Risk Analysis and Prevention: LELIE, a Tool dedicated to Procedure and Requirement Authoring, LREC 2012, Istanbul.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Design as a mutual learning process between users and designers", "authors": [ { "first": "Patrice", "middle": [], "last": "B\u00e9guin", "suffix": "" } ], "year": 2003, "venue": "Interacting with computers", "volume": "", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrice B\u00e9guin. 2003. Design as a mutual learning process between users and designers, Interacting with computers, 15 (6).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Repository of Rules and Lexical Resources for Discourse Structure Analysis: the Case of Explanation Structures", "authors": [ { "first": "Sarah", "middle": [], "last": "Bourse", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Saint-Dizier", "suffix": "" } ], "year": 2011, "venue": "Communications of the ACM", "volume": "54", "issue": "12", "pages": "92--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Bourse, Patrick Saint-Dizier. 2012. A Repository of Rules and Lexical Resources for Discourse Structure Analysis: the Case of Explanation Structures, LREC 2012, Istanbul. Gerhard Brewka, Thomas Eiter, Miros\u0142aw Truszczy\u0144ski. 2011. Answer set programming at a glance. Communications of the ACM 54 (12), 92-103.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic correction of adverb placement errors: an innovative grammar checker system for French users of English, Eurocall'10 proceedings", "authors": [ { "first": "Marie", "middle": [], "last": "Garnier", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie Garnier. 2012. Automatic correction of adverb placement errors: an innovative grammar checker system for French users of English, Eurocall'10 proceedings, Elsevier.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Role of Knowledge in Discourse Comprehension: A Construction-Integration Model", "authors": [ { "first": "Walther", "middle": [], "last": "Kintsch", "suffix": "" } ], "year": 1988, "venue": "Psychological Review", "volume": "", "issue": "", "pages": "95--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walther Kintsch. 1988. The Role of Knowledge in Discourse Comprehension: A Construction-Integration Model, Psychological Review, vol 95-2.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Requirements Engineering", "authors": [ { "first": "Elizabeth", "middle": [ "C" ], "last": "Hull", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Jackson", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Dick", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth C. Hull, Kenneth Jackson, Jeremy Dick. 2011. 
Requirements Engineering, Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Rhetorical Structure Theory: Towards a Functional Theory of Text Organisation", "authors": [ { "first": "C", "middle": [], "last": "William", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Mann", "suffix": "" }, { "first": "", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "TEXT", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C. Mann, Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a Functional Theory of Text Organisation, TEXT 8 (3), 243-281. Sandra A. Thompson (ed.). 1992. Discourse Description: diverse linguistic analyses of a fund raising text, John Benjamins.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Rhetorical Parsing of Natural Language Texts, ACL'97", "authors": [ { "first": "Dan", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Marcu. 1997. The Rhetorical Parsing of Natural Language Texts, ACL'97.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Theory and Practice of Discourse Parsing and Summarization", "authors": [ { "first": "Dan", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization, MIT Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Auto and alloconfrontation as tools for reflective activities", "authors": [ { "first": "Vanina", "middle": [], "last": "Mollo", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Falzon", "suffix": "" } ], "year": 2004, "venue": "Applied Ergonomics", "volume": "35", "issue": "6", "pages": "531--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanina Mollo, Pierre Falzon. 2004. Auto and allo-confrontation as tools for reflective activities. Applied Ergonomics, 35 (6), 531-540.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The development of collective reliability: a study of therapeutic decisionmaking", "authors": [ { "first": "Vanina", "middle": [], "last": "Mollo", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Falzon", "suffix": "" } ], "year": 2008, "venue": "Theoretical Issues in Ergonomics Science", "volume": "9", "issue": "3", "pages": "223--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanina Mollo, Pierre Falzon. 2008. The development of collective reliability: a study of therapeutic decision-making, Theoretical Issues in Ergonomics Science, 9(3), 223-254.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Customizing RST for the Automatic Production of Technical Manuals", "authors": [ { "first": "Dietmar", "middle": [], "last": "R\u00f6sner", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 1992, "venue": "Aspects of Automated Natural Language Generation", "volume": "", "issue": "", "pages": "199--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dietmar R\u00f6sner, Manfred Stede. 1992. Customizing RST for the Automatic Production of Technical Manuals, In Robert Dale et al. (eds.) Aspects of Automated Natural Language Generation. 
Berlin: Springer, 199-214.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Generating multilingual technical documents from a knowledge base: The TECHDOC project", "authors": [ { "first": "Dietmar", "middle": [], "last": "R\u00f6sner", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 1994, "venue": "Proc. of the International Conference on Computational Linguistics, COLING-94", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dietmar R\u00f6sner, Manfred Stede. 1994. Generating multilingual technical documents from a knowledge base: The TECHDOC project, In: Proc. of the International Conference on Computational Linguistics, COLING-94, Kyoto.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Processing Natural Language Arguments with the TextCoop Platform", "authors": [ { "first": "Patrick", "middle": [], "last": "Saint-Dizier", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Saint-Dizier. 2012. Processing Natural Language Arguments with the TextCoop Platform, Journal of Argumentation and Computation.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Writing remedies. Practical exercises for technical writing", "authors": [ { "first": "H", "middle": [], "last": "Edmond", "suffix": "" }, { "first": "", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edmond H. Weiss. 2000. Writing remedies. Practical exercises for technical writing, Oryx Press.", "links": null } }, "ref_entries": {} } }