{ "paper_id": "W12-0301", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:10:24.417789Z" }, "title": "From Character to Word Level: Enabling the Linguistic Analyses of Inputlog Process Data", "authors": [ { "first": "Mari\u00eblle", "middle": [], "last": "Leijten", "suffix": "", "affiliation": {}, "email": "marielle.leijten@ua.ac.be" }, { "first": "Lieve", "middle": [], "last": "Macken", "suffix": "", "affiliation": {}, "email": "lieve.macken@hogent.be" }, { "first": "Veronique", "middle": [], "last": "Hoste", "suffix": "", "affiliation": {}, "email": "veronique.hoste@hogent.be" }, { "first": "Eric", "middle": [], "last": "Van Horenbeeck", "suffix": "", "affiliation": {}, "email": "eric.vanhorenbeeck@ua.ac.be" }, { "first": "Luuk", "middle": [], "last": "Van Waes", "suffix": "", "affiliation": {}, "email": "luuk.vanwaes@ua.ac.be" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Keystroke-logging tools are widely used in writing process research. These applications are designed to capture each character and mouse movement as isolated events as an indicator of cognitive processes. The current research project explores the possibilities of aggregating the logged process data from the letter level (keystroke) to the word level by merging them with existing lexica and using NLP tools. Linking writing process data to lexica and using NLP tools enables researchers to analyze the data on a higher, more complex level. In this project the output data of Inputlog are segmented on the sentence level and then tokenized. However, by definition writing process data do not always represent clean and grammatical text. Coping with this problem was one of the main challenges in the current project. Therefore, a parser has been developed that extracts three types of data from the S-notation: word-level revisions, deleted fragments, and the final writing product. The within-word typing errors are identified and excluded from further analyses. At this stage the Inputlog process data are enriched with the following linguistic information: part-ofspeech tags, lemmas, chunks, syllable boundaries and word frequencies.", "pdf_parse": { "paper_id": "W12-0301", "_pdf_hash": "", "abstract": [ { "text": "Keystroke-logging tools are widely used in writing process research. These applications are designed to capture each character and mouse movement as isolated events as an indicator of cognitive processes. The current research project explores the possibilities of aggregating the logged process data from the letter level (keystroke) to the word level by merging them with existing lexica and using NLP tools. Linking writing process data to lexica and using NLP tools enables researchers to analyze the data on a higher, more complex level. In this project the output data of Inputlog are segmented on the sentence level and then tokenized. However, by definition writing process data do not always represent clean and grammatical text. Coping with this problem was one of the main challenges in the current project. Therefore, a parser has been developed that extracts three types of data from the S-notation: word-level revisions, deleted fragments, and the final writing product. The within-word typing errors are identified and excluded from further analyses. 
At this stage the Inputlog process data are enriched with the following linguistic information: part-of-speech tags, lemmas, chunks, syllable boundaries and word frequencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Keystroke-logging is a popular method in writing research (Sullivan & Lindgren, 2006) to study the underlying cognitive processes (Berninger, 2012). Various keystroke-logging programs have been developed, each with a different focus (a detailed overview of available keystroke-logging programs can be found at http://www.writingpro.eu/logging_programs.php). The programs differ in the events that are logged (keyboard and/or mouse, speech recognition), in the environment that is logged (a program-specific text editor, MS Word, or all Windows-based applications), in their combination with other logging tools (e.g., eye tracking and usability tools like Morae), and in the analytic detail of the output files. Examples of keystroke-logging tools are:", "cite_spans": [ { "start": 58, "end": 85, "text": "(Sullivan & Lindgren, 2006)", "ref_id": "BIBREF10" }, { "start": 130, "end": 147, "text": "(Berninger, 2012)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Scriptlog: Text editor, eye tracking (Str\u00f6mqvist, Holmqvist, Johansson, Karlsson, & Wengelin, 2006), \u2022 Inputlog: Windows environment, speech recognition (Leijten & Van Waes, 2006), \u2022 Translog: Text editor, integration of dictionaries (Jakobsen, 2006) (Wengelin et al., 2009).", "cite_spans": [ { "start": 38, "end": 99, "text": "(Str\u00f6mqvist, Holmqvist, Johansson, Karlsson, & Wengelin, 2006", "ref_id": "BIBREF9" }, { "start": 155, "end": 180, "text": "(Leijten & Van Waes, 2006", "ref_id": "BIBREF6" }, { "start": 237, "end": 253, "text": "(Jakobsen, 2006)", "ref_id": "BIBREF4" }, { "start": 254, "end": 277, "text": "(Wengelin et al., 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Keystroke loggers' data output is mainly based on capturing each character and mouse movement as isolated events. In the current research project 2 we explore the possibilities of aggregating the logged process data from the letter level (keystroke) to the word level by merging them with existing lexica and using NLP tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Linking writing process data to lexica and using NLP tools enables us to analyze the data on a higher, more complex level. By doing so we would like to stimulate interdisciplinary research, and relate findings in the domain of writing research to other domains (e.g., Pragmatics, CALL, Translation studies, Psycholinguistics).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We argue that the enriched process data combined with temporal information (time stamps, action times and pauses) will further facilitate the analysis of the logged data and address innovative research questions. For instance: Is there a developmental shift in the pausing behaviors of writers related to word classes, e.g., before adjectives as opposed to before nouns (cf. cognitive development in language production)? Do translation segments correspond to linguistic units (e.g., comparing speech recognition and keyboarding)? 
Which linguistic shifts characterize substitutions as a subtype of revisions (e.g., linguistic categories, frequency)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A more elaborate example of a research question in which the linguistic information has added value is: Is the text production of causal markers more cognitively demanding than the production of temporal markers? In reading research, there is evidence that it takes readers longer to process sentences or paragraphs that contain causal markers than temporal markers. Does the same hold for the production of these linguistic markers? Based on the linguistic information added to the writing process data, researchers can now easily select causal and temporal markers and compare the process data from various perspectives (cf. step 4, linguistic analyses).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The work described in this paper is based on the output of Inputlog 3 , but it can also be applied to the output of other keystroke-logging programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To promote more linguistically-oriented writing process research, Inputlog aggregates the logged process data from the character level (keystroke) to the word level. In a subsequent step, we use various Natural Language Processing (NLP) tools to further annotate the logged process data with different kinds of linguistic information: part-of-speech tags, lemmata, chunk boundaries, syllable boundaries, and word frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows. Section 2 describes the output of Inputlog, and section 3 describes an intermediate level of analysis. Section 4 describes the flow of the linguistic analyses and the various linguistic annotations. Section 5 wraps up with some concluding remarks and suggestions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inputlog is a word-processor-independent keystroke-logging program that registers keystrokes, mouse movements, clicks and pauses not only in MS Word, but also in any other Windows-based software application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "Keystroke-logging programs store the complete sequence of keyboard and/or mouse events in chronological order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "Figure 1 represents \"Volgend jaar\" ('Next Year') at the character and mouse action level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "The keyboard strokes, mouse movements, and mouse clicks are represented in a readable output for each action (e.g., 'SPACE' refers to the spacebar, 'LEFT Click' is a left mouse click, and 'Movement' is a synthesized representation of a continuous mouse movement). Additionally, timestamps indicate when keys are pressed and released, and when mouse movements are made. For each keystroke in MS Word, the position of the character in the document is recorded, as well as the total length of the document at that specific moment. 
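To make this event-level representation concrete, the sketch below models such a logged event and derives interkey pauses from the timestamps. The field names are hypothetical and merely illustrate the kind of record described above, not Inputlog's actual output schema:

```python
from dataclasses import dataclass

@dataclass
class LoggedEvent:
    # Hypothetical fields mirroring the information described above.
    output: str       # readable output, e.g. 'V', 'SPACE', 'LEFT Click'
    start_time: int   # time the key is pressed (ms)
    end_time: int     # time the key is released (ms)
    position: int     # position of the character in the document
    doc_length: int   # total document length at that moment

def interkey_pauses(events):
    # Pause between consecutive key presses, in ms.
    return [b.start_time - a.start_time for a, b in zip(events, events[1:])]

events = [LoggedEvent('V', 1120, 1190, 0, 1),
          LoggedEvent('o', 1310, 1365, 1, 2),
          LoggedEvent('l', 1480, 1540, 2, 3)]
print(interkey_pauses(events))  # [190, 170]
```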
Logging at this level of detail enables researchers to take the non-linearity of the writing process into account, which results from the revisions writers make during text production. To represent this non-linearity, the S-notation is used. The S-notation (Kollberg & Severinson Eklundh, 2002) contains information about the revision types (insertion or deletion), the order of the revisions, and the place in the text where the writing process was interrupted. The S-notation can be generated automatically from the keystroke-logging data and has become a standard for representing the non-linearity of writing processes. Figure 2 shows an example of the S-notation. The text is taken from an experiment with master's students in Multilingual Professional Communication who were asked to write a (Dutch) tweet about a conference (VWEC). The S-notation shows both the final product and the process that produced it.", "cite_spans": [ { "start": 785, "end": 822, "text": "(Kollberg & Severinson Eklundh, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1159, "end": 1167, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "Volgend\u2022jaar\u2022organiseert\u2022{#| 4 } 3 VWEC\u2022een\u2022{boeiend\u2022| 9 } 8 congres\u2022[over\u2022'] 1 | 1 [met\u2022als\u2022thema| 10 ] 9 {over} 10 \u2022'Corporate\u2022Communication{'| 8 } 7 .[.] 2 | 2 [\u2022Wat\u2022levert\u2022het\u2022op?'.| 7 ] 6 \u2022Blijf\u2022[ons\u2022volgen\u2022op| 5 ] 4 {op\u2022de\u2022hoogte\u2022via| 6 } 5 \u2022www.vwec2012.be.| 3 \u2022 Figure 2. Example of S-notation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "The following conventions are used in S-notation: | i marks a break in the writing process with sequential number i; {insertion} i marks an insertion occurring after break i; and [deletion] i marks a deletion occurring after break i. The example in Figure 2 can be read as follows. The writer formulates in one segment \"Volgend jaar organiseert VWEC een congres over\" ('Next year VWEC organises a conference on'). She decides to delete \"over\" (index 1) and then adds the remainder of her first draft \"met als thema 'Corporate Communication. Wat levert het op?'.\" ('themed 'Corporate Communication. What is in it for us?'.') She deletes a full stop and ends with \"Blijf ons volgen op www.vwec2012.be.\" ('Follow us on www.vwec2012.be'). The third revision is the addition of the hashtag before VWEC. Then she rephrases \"ons volgen op\" into \"op de hoogte via.\" She notices that her tweet is too long (max. 140 characters) and decides to delete the subtitle of the conference. She adds the adjective \"boeiend\" ('interesting') to the conference and ends by deleting \"met als thema\" ('themed').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inputlog", "sec_num": "2" }, { "text": "At the intermediate level, Inputlog data can also be used to analyze data at the digraph level, for instance, to study interkey intervals (or digraph latency) in relation to typing speed, the keyboard efficiency of touch typists and other typists, dyslexia and keyboard fluency, biometric verification, etc. 
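As a concrete illustration of such digraph-level measures, the sketch below pairs consecutive keystrokes and groups the interkey intervals per digraph. It assumes a plain list of character/timestamp pairs rather than Inputlog's actual output format:

```python
from collections import defaultdict

def digraph_latencies(keystrokes):
    # Group interkey intervals (ms) by digraph, e.g. 'un' -> [140].
    latencies = defaultdict(list)
    for (c1, t1), (c2, t2) in zip(keystrokes, keystrokes[1:]):
        latencies[c1 + c2].append(t2 - t1)
    return latencies

# The word 'unit' typed as four keystrokes with hypothetical timestamps.
strokes = [('u', 0), ('n', 140), ('i', 295), ('t', 430)]
for digraph, times in sorted(digraph_latencies(strokes).items()):
    print(digraph, times)  # it [135], ni [155], un [140]
```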
For this type of research, logging data can be aggregated to an intermediate level in which two consecutive events are treated as a unit (e.g., the word 'unit' analyzed as the digraphs un-ni-it).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intermediate level", "sec_num": "3" }, { "text": "Grabowski's research on the internal structure of students' keyboard skills in different writing tasks is a case in point (Grabowski, 2008). He studied whether there are patterns of overall keyboard behavior and whether such patterns are stable across different (copying) tasks. Across tasks, typing speed turned out to be the most stable characteristic of a keyboard user. Another example is the work by Nottbusch and his colleagues. Focusing on linguistic aspects of interkey intervals, their research (Nottbusch, 2010; Sahel, Nottbusch, Grimm, & Weingarten, 2008) shows that syllable boundaries within words have an effect on the temporal keystroke succession.", "cite_spans": [ { "start": 122, "end": 139, "text": "(Grabowski, 2008)", "ref_id": "BIBREF3" }, { "start": 504, "end": 521, "text": "(Nottbusch, 2010;", "ref_id": "BIBREF7" }, { "start": 522, "end": 566, "text": "Sahel, Nottbusch, Grimm, & Weingarten, 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Intermediate level", "sec_num": "3" }, { "text": "Syllable boundaries lead to increased interkey intervals at the digraph level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intermediate level", "sec_num": "3" }, { "text": "In recent research, Inputlog data have also been used to analyze typing errors at this level (Van Waes & Leijten, 2010). As will be demonstrated in the next section, typing errors complicate the analysis of logging data at the word and sentence level because the linear reconstruction is disrupted. For this purpose, a large experimental corpus based on a controlled copying task was analyzed, focusing on five digraphs with different characteristics (frequency, keyboard distribution, left-right coordination). The results of a multilevel analysis show that there is no correlation between the frequency of a digraph and the chance that a typing error occurs. However, typing errors show limited variation: pressing the adjacent key explains more than 40% of the errors, both for touch typists and other typists; the chance that a typing error is made is related to the characteristics of the digraph and the individual typing style.", "cite_spans": [ { "start": 91, "end": 116, "text": "(Van Waes & Leijten, 2010", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Intermediate level", "sec_num": "3" }, { "text": "Moreover, the median pausing time preceding a typing error tends to be longer than the median interkey transition of the intended digraph typed correctly. These results illustrate that further research should make it possible to identify and isolate typing errors in logged process data and to build an algorithm that filters them out during data preparation. This would benefit parsing at a later stage (see section 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intermediate level", "sec_num": "3" }, { "text": "As explained above, writing process data gathered via traditional keystroke-logging tools are represented at the character level and produce non-linear data (containing sentence fragments, unfinished sentences/words and spelling errors). These two characteristics are the main obstacles that we need to cope with to analyze writing process data on a higher level. 
In this section we explain the flow of the linguistic analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flow of linguistic analyses", "sec_num": "4" }, { "text": "Natural Language Processing tools, such as part-of-speech taggers, lemmatizers and chunkers, are trained on (complete) sentences and words. Therefore, to use the standard NLP tools to enrich the process data with linguistic information, in a first step words, word groups, and sentences are extracted from the process data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1 -aggregate letter to word level", "sec_num": "4.1" }, { "text": "The S-notation was used as a basis to further segment the data into sentences and to tokenize them. A dedicated sentence segmentation and tokenization module was developed for this purpose. This dedicated module can cope with the specific S-notation annotations, such as insertion, deletion and break markers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1 -aggregate letter to word level", "sec_num": "4.1" }, { "text": "As mentioned before, standard NLP tools are designed to work with clean, grammatically correct text. We thus decided to treat word-level revisions differently from higher-level revisions and to distinguish deleted fragments from the final writing product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "We developed a parser that extracts three types of data from the S-notation: word-level revisions, deleted fragments, and the final writing product. The word-level revisions can be extracted from the S-notation by retaining all words with word-internal square or curly brackets (see excerpt 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "(1 - word-level revision)
Delet[r]ion incorrect: Deletrion; correct: deletion
In{s}ertion incorrect: Inertion; correct: insertion
Conceptually, the deleted fragments can be extracted from the S-notation by retaining only the words and phrases that are surrounded by word-external square brackets (2); and the final product data can be obtained by deleting everything in between square brackets from the S-notation. In practice, the situation is more complicated, as insertions and deletions can be nested.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "An example of the three different data types extracted from the S-notation is presented in the excerpt below. To facilitate the readability of the resulting data, the indices are omitted (3). In sum, the output of Inputlog is segmented into sentences and tokenized. The S-notation is divided into three types of revisions, and the within-word typing errors are excluded from further analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "Although the set-up of the Inputlog extension is largely language-independent, the NLP tools used are language-dependent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "As a proof of concept, we provide evidence from English and Dutch (see Figure 3). 
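To make the parsing step concrete, the following sketch shows the idea behind the three extraction operations on a simplified, non-nested S-notation fragment without break indices. It is an illustrative sketch, not the actual Inputlog parser, which additionally handles nested insertions and deletions and restores the revision indices:

```python
import re

# Simplified S-notation: curly brackets mark insertions, square brackets deletions.
s_notation = 'Delet[r]ion is a {boeiend }congres [met als thema ]over writing.'

def word_level_revisions(s):
    # Words with word-internal square or curly brackets, e.g. 'Delet[r]ion'.
    return [w for w in s.split() if re.search(r'\w[\[{].*?[\]}]', w)]

def deleted_fragments(s):
    # Fragments surrounded by square brackets (word-level ones included here).
    return re.findall(r'\[(.*?)\]', s)

def final_product(s):
    # Drop deletions, keep insertions without their curly brackets.
    return re.sub(r'[{}]', '', re.sub(r'\[.*?\]', '', s))

print(word_level_revisions(s_notation))  # ['Delet[r]ion']
print(deleted_fragments(s_notation))     # ['r', 'met als thema ']
print(final_product(s_notation))         # Deletion is a boeiend congres over writing.
```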
", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 77, "text": "Figure 3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Step 2 -parsing the S-notation", "sec_num": "4.2" }, { "text": "As standard NLP tools are trained on clean data, these tools are not suited for processing input containing spelling errors. Therefore, we only enrich the final product data and the deleted fragments with different kinds of linguistic annotations. As part-of-speech taggers typically use the surrounding local context to determine the proper part-of-speech tag for a given word (typically a window of two to three words and/or tags is used), the deletions in context are extracted from the S-notation to be processed by the part-of-speech tagger. The deleted fragments in context consist of the whole text string without the insertions and are only used to optimize the results of the linguistic annotation. For the shallow linguistic analysis, we used the LT 3 shallow parsing tools suite consisting of:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3 -enriching process data with linguistic information", "sec_num": "4.3" }, { "text": "\u2022 a part-of-speech tagger (LeTsTAG),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3 -enriching process data with linguistic information", "sec_num": "4.3" }, { "text": "\u2022 a lemmatizer (LeTsLEMM), and \u2022 a chunker (LeTsCHUNK).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3 -enriching process data with linguistic information", "sec_num": "4.3" }, { "text": "The LT3 tools are platform-independent and hence run on Windows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3 -enriching process data with linguistic information", "sec_num": "4.3" }, { "text": "The English PoS tagger uses the Penn Treebank tag set, which contains 45 distinct tags. The Dutch part-of-speech tagger uses the CGN tag set codes (Van Eynde, Zavrel, & Daelemans, 2000) , which is characterized by a high level of granularity. Apart from the word class, the CGN tag set codes a wide range of morpho-syntactic features as attributes to the word class. In total, 316 distinct tags are discerned.", "cite_spans": [ { "start": 147, "end": 185, "text": "(Van Eynde, Zavrel, & Daelemans, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Part of speech tags", "sec_num": null }, { "text": "During lemmatization, for each orthographic token, the base form (lemma) is generated. For verbs, the base form is the infinitive; for most other words, this base form is the stem, i.e., the word form without inflectional affixes. The lemmatizers make use of the predicted PoS codes to disambiguate ambiguous word forms, e.g., Dutch \"landen\" can be an infinitive (base form \"landen\") or plural form of a noun (base form \"land\"). The lemmatizers were trained on the English and Dutch parts of the Celex lexical database respectively .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemmata", "sec_num": null }, { "text": "During text chunking syntactically related consecutive words are combined into nonoverlapping, non-recursive chunks on the basis of a fairly superficial analysis. 
The chunks are represented by means of IOB-tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunks", "sec_num": null }, { "text": "In the IOB-tagging scheme, each token belongs to one of the following three types: I (inside), O (outside) and B (begin); the B- and I-tags are followed by the chunk type, e.g., B-VP, I-VP. We adapted the IOB-tagging scheme and added an end tag (E) to explicitly mark the end of a chunk. Accuracy scores of part-of-speech taggers and lemmatizers typically fluctuate around 97% to 98%; accuracy scores of 95% to 96% are obtained for chunking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunks", "sec_num": null }, { "text": "After annotation, the final writing product, deleted fragments, and word-level corrections are aligned and the indices are restored. Figures 4 and 5 show how we enriched the logged process data with different kinds of linguistic information: lemmata, part-of-speech tags, and chunk boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunks", "sec_num": null }, { "text": "We further added some word-level annotations on the final writing product and the deletions, viz., syllable boundaries and word frequencies (see the last two columns in Figures 4 and 5).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 180, "text": "Figures 4 and 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Chunks", "sec_num": null }, { "text": "The syllabification tools were trained on Celex (http://lt3.hogent.be/en/tools/timbl-syllabification).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syllable boundaries:", "sec_num": null }, { "text": "Syllabification was approached as a classification task: a large instance base of syllabified data is presented to a classification algorithm, which automatically learns from it the patterns needed to syllabify unseen data. Accuracy scores for syllabification reside in the range of 92% to 95%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syllable boundaries:", "sec_num": null }, { "text": "Frequency lists for Dutch and English were compiled on the basis of Wikipedia pages, which were extracted from the XML dump of the Dutch and English Wikipedia of December 2011. We used the Wikipedia Extractor developed by Medialab (http://medialab.di.unipi.it/wiki/Wikipedia_Extractor) to extract the text from the wiki files. The Wikipedia text files were further tokenized and enriched with part-of-speech tags and lemmata. The Wikipedia frequency lists can thus group different word forms belonging to one lemma. Figure 4 shows the final writing product and word-level revisions enriched with linguistic information.", "cite_spans": [], "ref_spans": [ { "start": 419, "end": 427, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Word Frequency", "sec_num": null }, { "text": "The current version of the Dutch frequency list has been compiled on the basis of nearly 100 million tokens coming from 395,673 Wikipedia pages, which is almost half of the Dutch Wikipedia dump of December 2011.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Frequency", "sec_num": null }, { "text": "Frequencies are presented as absolute frequencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Frequency", "sec_num": null }, { "text": "In a final step we combine the process data with the linguistic information. 
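A minimal sketch of this combination step, assuming hypothetical merged records in which each token carries its part-of-speech tag and the pause (in ms) that preceded it:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical merged records: (token, PoS tag, pause before the token in ms).
merged = [('Volgend', 'ADJ', 1250), ('jaar', 'N', 640),
          ('boeiend', 'ADJ', 2100), ('congres', 'N', 870)]

pauses = defaultdict(list)
for token, pos, pause_before in merged:
    pauses[pos].append(pause_before)

for pos, values in sorted(pauses.items()):
    print(pos, mean(values))  # ADJ 1675, N 755
```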
More generally, based on the time information provided by Inputlog, researchers can calculate various measures, e.g., the length of a pause within, before and after lemmata and part-of-speech tags, and at chunk boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 4 -combining process data with linguistic information", "sec_num": "4.4" }, { "text": "As an example, Table 1 shows the mean pausing time before and after the adjectives and nouns in the tweet. Of course, this is a very small-scale example, but it shows the possibilities of exploring writing process data from a linguistic perspective. Table 1. Example of process data and linguistic information", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 1", "ref_id": null }, { "start": 249, "end": 256, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Step 4 -combining process data with linguistic information", "sec_num": "4.4" }, { "text": "In this example the mean pausing time before adjectives is twice as long as before nouns. The pausing time after such a segment shows the opposite pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 4 -combining process data with linguistic information", "sec_num": "4.4" }, { "text": "Pauses at the beginning of chunks are also more than twice as long as pauses in the middle of a chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 4 -combining process data with linguistic information", "sec_num": "4.4" }, { "text": "In this paper we presented how writing process data can be enriched with linguistic information. The annotated output facilitates the linguistic analysis of the logged data and provides a valuable basis for more linguistically-oriented writing process research. We hope that this perspective will further enrich writing process research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future research", "sec_num": "5" }, { "text": "In a first phase we focused only on English and Dutch, but the method can easily be applied to other languages as well, provided that the linguistic tools are available for a Windows platform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional annotations and analyses", "sec_num": "5.1" }, { "text": "For the moment, the linguistic annotations are limited to part-of-speech tags, lemmata, chunk information, syllabification, and word frequency information, but they can be extended, e.g., with n-gram frequencies to capture collocations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional annotations and analyses", "sec_num": "5.1" }, { "text": "By aggregating the logged process data from the character level (keystroke) to the word level, general statistics (e.g., the total number of deleted or inserted words, or the pause length before nouns preceded by an adjective or not) can also be generated easily from the output of Inputlog.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional annotations and analyses", "sec_num": "5.1" }, { "text": "At this point Inputlog is a standalone program that needs to be installed on the same local machine that is used to produce the texts. This makes sense as long as the heaviest part of the work is the logging of a writing process. 
However, extending the scope from a character-based logging device to a system that supplies fine-grained production and process information to various NLP tools is a compelling reason to rethink the overall architecture of the software.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Technical flow of Inputlog & linguistic tools", "sec_num": "5.2" }, { "text": "It is not feasible to install the necessary linguistic software with its accompanying databases on every device. By decoupling the capturing part from the analytics, a research group will have a better view of the use of its hardware and software resources, and potential copyright issues become easier to solve. Inputlog is currently, for pragmatic reasons, Windows-based, but with the new architecture any tool on any OS will be able to exchange data and results. It will be possible to add an NLP module that receives Inputlog data through a communication layer. A workflow procedure then presents the data, in order, to the different NLP packages and collects the final output. Because all data traffic is done with XML files, cooperation between software from different ecosystems becomes conceivable. Finally, the module has an administration utility handling the necessary user authentication and permissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Technical flow of Inputlog & linguistic tools", "sec_num": "5.2" }, { "text": "FWO-Merging writing process data with lexica -2009-2012", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.inputlog.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This study is partially funded by a research grant of the Flanders Research Foundation (FWO 2009-2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "The CELEX lexical database on CD-ROM", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" }, { "first": "R", "middle": [], "last": "Piepenbrock", "suffix": "" }, { "first": "H", "middle": [], "last": "Van Rijn", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H., Piepenbrock, R., & van Rijn, H. (1993). The CELEX lexical database on CD-ROM. Philadelphia, PA: Linguistic Data Consortium.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Past, Present, and Future Contributions of Cognitive Writing Research to Cognitive Psychology", "authors": [ { "first": "V", "middle": [], "last": "Berninger", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berninger, V. (2012). 
Past, Present, and Future Contributions of Cognitive Writing Research to Cognitive Psychology: Taylor and Francis.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The internal structure of university students' keyboard skills", "authors": [ { "first": "J", "middle": [], "last": "Grabowski", "suffix": "" } ], "year": 2008, "venue": "Journal of Writing Research", "volume": "1", "issue": "1", "pages": "27--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grabowski, J. (2008). The internal structure of university students' keyboard skills. Journal of Writing Research, 1(1), 27-52.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Translog: Research methods in translation", "authors": [ { "first": "A", "middle": [ "L" ], "last": "Jakobsen", "suffix": "" } ], "year": 2006, "venue": "Computer Keystroke Logging and Writing: Methods and Applications", "volume": "", "issue": "", "pages": "95--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakobsen, A. L. (2006). Translog: Research methods in translation. In K. P. H. Sullivan & E. Lindgren (Eds.), Computer Keystroke Logging and Writing: Methods and Applications (pp. 95-105). Oxford: Elsevier.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Studying writers' revising patterns with S-notation analysis", "authors": [ { "first": "P", "middle": [], "last": "Kollberg", "suffix": "" }, { "first": "K", "middle": [], "last": "Severinson Eklundh", "suffix": "" } ], "year": 2002, "venue": "Contemporary Tools and Techniques for Studying Writing", "volume": "", "issue": "", "pages": "89--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kollberg, P., & Severinson Eklundh, K. (2002). Studying writers' revising patterns with S-notation analysis. In T. Olive & C. M. Levy (Eds.), Contemporary Tools and Techniques for Studying Writing (pp. 89-104). Dordrecht: Kluwer Academic Publishers.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Inputlog: New Perspectives on the Logging of On-Line Writing", "authors": [ { "first": "M", "middle": [], "last": "Leijten", "suffix": "" }, { "first": "L", "middle": [], "last": "Van Waes", "suffix": "" } ], "year": 2006, "venue": "Computer Keystroke Logging and Writing: Methods and Applications", "volume": "", "issue": "", "pages": "73--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leijten, M., & Van Waes, L. (2006). Inputlog: New Perspectives on the Logging of On-Line Writing. In K. P. H. Sullivan & E. Lindgren (Eds.), Computer Keystroke Logging and Writing: Methods and Applications (pp. 73-94). Oxford: Elsevier.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Grammatical planning, execution, and control in written sentence production", "authors": [ { "first": "G", "middle": [], "last": "Nottbusch", "suffix": "" } ], "year": 2010, "venue": "Reading and Writing", "volume": "23", "issue": "7", "pages": "777--801", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nottbusch, G. (2010). Grammatical planning, execution, and control in written sentence production. 
Reading and Writing, 23(7), 777-801.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Written production of German compounds: Effects of lexical frequency and semantic transparency", "authors": [ { "first": "S", "middle": [], "last": "Sahel", "suffix": "" }, { "first": "G", "middle": [], "last": "Nottbusch", "suffix": "" }, { "first": "A", "middle": [], "last": "Grimm", "suffix": "" }, { "first": "R", "middle": [], "last": "Weingarten", "suffix": "" } ], "year": 2008, "venue": "Written Language and Literacy", "volume": "11", "issue": "2", "pages": "211--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahel, S., Nottbusch, G., Grimm, A., & Weingarten, R. (2008). Written production of German compounds: Effects of lexical frequency and semantic transparency. Written Language and Literacy, 11(2), 211-228.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What keystroke logging can reveal about writing", "authors": [ { "first": "S", "middle": [], "last": "Str\u00f6mqvist", "suffix": "" }, { "first": "K", "middle": [], "last": "Holmqvist", "suffix": "" }, { "first": "V", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "H", "middle": [], "last": "Karlsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Wengelin", "suffix": "" } ], "year": 2006, "venue": "Computer Keystroke Logging and Writing: Methods and Applications", "volume": "", "issue": "", "pages": "45--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Str\u00f6mqvist, S., Holmqvist, K., Johansson, V., Karlsson, H., & Wengelin, A. (2006). What keystroke logging can reveal about writing. In K. P. H. Sullivan & E. Lindgren (Eds.), Computer Keystroke Logging and Writing: Methods and Applications (pp. 45-71). Oxford: Elsevier.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Computer Key-Stroke Logging and Writing", "authors": [ { "first": "K", "middle": [ "P H" ], "last": "Sullivan", "suffix": "" }, { "first": "E", "middle": [], "last": "Lindgren", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sullivan, K. P. H., & Lindgren, E. (2006). Computer Key-Stroke Logging and Writing. Oxford: Elsevier Science.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Part of Speech Tagging and Lemmatisation for the Spoken Dutch Corpus", "authors": [ { "first": "F", "middle": [], "last": "Van Eynde", "suffix": "" }, { "first": "J", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2000, "venue": "the second International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van Eynde, F., Zavrel, J., & Daelemans, W. (2000). Part of Speech Tagging and Lemmatisation for the Spoken Dutch Corpus. Paper presented at the Proceedings of the second International Conference on Language Resources and Evaluation (LREC), Athens, Greece.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The dynamics of typing errors in text production", "authors": [ { "first": "L", "middle": [], "last": "Van Waes", "suffix": "" }, { "first": "M", "middle": [], "last": "Leijten", "suffix": "" } ], "year": 2010, "venue": "12th International Conference of the Earli Special Interest Group on Writing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van Waes, L., & Leijten, M. (2010). 
The dynamics of typing errors in text production. Paper presented at the SIG Writing 2010, 12th International Conference of the Earli Special Interest Group on Writing, Heidelberg.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combined eyetracking and keystrokelogging methods for studying cognitive processes in text production", "authors": [ { "first": "A", "middle": [], "last": "Wengelin", "suffix": "" }, { "first": "M", "middle": [], "last": "Torrance", "suffix": "" }, { "first": "K", "middle": [], "last": "Holmqvist", "suffix": "" }, { "first": "S", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "D", "middle": [], "last": "Galbraith", "suffix": "" }, { "first": "V", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "R", "middle": [], "last": "Johansson", "suffix": "" } ], "year": 2009, "venue": "Behavior Research Methods", "volume": "41", "issue": "2", "pages": "337--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wengelin, A., Torrance, M., Holmqvist, K., Simpson, S., Galbraith, D., Johansson, V., & Johansson, R. (2009). Combined eyetracking and keystroke- logging methods for studying cognitive processes in text production. Behavior Research Methods, 41(2), 337-351.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Example of general analysis Inputlog." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "a break in the writing process with sequential number i; \u2022 {insertion} i : an insertion occurring after break i; \u2022 [deletion] i : a deletion occurring after break i. The example in Figure 2 can be read as follows:" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Flow of the linguistic analyses." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Deleted fragments enriched with linguistic information." } } } }