{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:44:30.889623Z" }, "title": "MMPE: A Multi-Modal Interface Using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation", "authors": [ { "first": "Nico", "middle": [], "last": "Herbig", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Santanu", "middle": [], "last": "Pal", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tim", "middle": [], "last": "D\u00fcwel", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kalliopi", "middle": [], "last": "Meladaki", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Mahsa", "middle": [], "last": "Monshizadeh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Vladislav", "middle": [], "last": "Hnatovskiy", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Antonio", "middle": [], "last": "Kr\u00fcger", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The shift from traditional translation to postediting (PE) of machine-translated (MT) text can save time and reduce errors, but it also affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. Users can directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place. 
All text manipulations are logged in an easily interpretable format to simplify subsequent translation process research. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions. Overall, experiment participants were enthusiastic about the new modalities and saw them as useful extensions to mouse & keyboard, but not as a complete substitute.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The shift from traditional translation to postediting (PE) of machine-translated (MT) text can save time and reduce errors, but it also affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. Users can directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place. All text manipulations are logged in an easily interpretable format to simplify subsequent translation process research. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions. 
Overall, experiment participants were enthusiastic about the new modalities and saw them as useful extensions to mouse & keyboard, but not as a complete substitute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As machine translation (MT) has been making substantial improvements in recent years 1 , more and more professional translators are integrating this technology into their translation workflows (Zaretskaya et al., 2016; Zaretskaya and Seghiri, 2018) . The process of using a pre-translated text as a basis and improving it to create the final translation is called post-editing (PE). While translation memory (TM) is still often valued higher than MT , a recent study by Vela et al. (2019) shows that professional translators chose PE of MT over PE of TM and translation from scratch in 80% of the cases. Regarding the time savings achieved through PE, Zampieri and Vela (2014) find that PE was on average 28% faster for technical translations, Toral et al. (2018) report productivity gains of 36% when using modern neural MT, and Aranberri et al. (2014) show that PE increases translation throughput for both professionals and lay users. Furthermore, it has been shown that PE not only leads to reduced time but also reduces errors (Green et al., 2013) .", "cite_spans": [ { "start": 193, "end": 218, "text": "(Zaretskaya et al., 2016;", "ref_id": "BIBREF22" }, { "start": 219, "end": 248, "text": "Zaretskaya and Seghiri, 2018)", "ref_id": "BIBREF21" }, { "start": 470, "end": 488, "text": "Vela et al. (2019)", "ref_id": "BIBREF18" }, { "start": 652, "end": 676, "text": "Zampieri and Vela (2014)", "ref_id": "BIBREF19" }, { "start": 744, "end": 763, "text": "Toral et al. (2018)", "ref_id": "BIBREF16" }, { "start": 822, "end": 853, "text": "MT, and Aranberri et al. 
(2014)", "ref_id": null }, { "start": 1032, "end": 1052, "text": "(Green et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "Switching from traditional translation to PE results in major changes in translation workflows (Zaretskaya and Seghiri, 2018) , including the interaction pattern (Carl et al., 2010) , yielding a significantly reduced amount of mouse and keyboard events (Green et al., 2013) . This requires thorough investigation in terms of interface design, since the task changes from mostly text production to comparing and adapting MT and TM proposals, or put differently, from control to supervision.", "cite_spans": [ { "start": 95, "end": 125, "text": "(Zaretskaya and Seghiri, 2018)", "ref_id": "BIBREF21" }, { "start": 162, "end": 181, "text": "(Carl et al., 2010)", "ref_id": "BIBREF4" }, { "start": 253, "end": 273, "text": "(Green et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "While most computer-aided translation (CAT) tools focus on traditional translation and incorporate only mouse & keyboard, previous research investigated other input modalities: automatic speech recognition (ASR) for dictating translations has already been explored in the 90s (Dymetman et al., 1994; Brousseau et al., 1995) and the more recent investigation of ASR for PE (Martinez et al., 2014) even argues that a combination with typing could boost productivity. Mesa-Lao (2014) finds that PE trainees have a positive attitude towards speech input and would consider adopting it, and Zapata et al. (2017) found that ASR for PE was faster than ASR for translation from scratch. 
Due to these benefits, commercial CAT tools like memoQ and MateCat are also beginning to integrate ASR.", "cite_spans": [ { "start": 276, "end": 299, "text": "(Dymetman et al., 1994;", "ref_id": "BIBREF6" }, { "start": 300, "end": 323, "text": "Brousseau et al., 1995)", "ref_id": "BIBREF3" }, { "start": 372, "end": 395, "text": "(Martinez et al., 2014)", "ref_id": "BIBREF11" }, { "start": 586, "end": 606, "text": "Zapata et al. (2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "The CASMACAT tool (Alabau et al., 2013) allows the user to input text by writing with e-pens in a special area. A vision paper (Alabau and Casacuberta, 2012) proposes to instead use e-pens for PE sentences with few errors in place and provides examples of symbols that could be used for this. Studies on mobile PE via touch and speech (O'Brien et al., 2014; Torres-Hostench et al., 2017) show that participants especially liked reordering words through touch drag and drop, and preferred voice when translating from scratch, but used the iPhone keyboard for small changes. Teixeira et al. (2019) also explore a combination of touch and speech; however, their touch input received poor feedback since (a) their tile view (where each word is a tile that can be dragged around) made reading more complicated, and (b) touch insertions were rather complex to achieve within their implementation. In contrast, dictation functionality was shown to be quite good and even preferred to mouse and keyboard by half of the participants. The results of an elicitation study by Herbig et al. (2019a) indicate that pen, touch, and speech interaction should be combined with mouse and keyboard to improve PE of MT. In contrast, other modalities like eye tracking or gestures were seen as less promising. 
This paper presents MMPE, the first translation environment combining standard mouse & keyboard input with touch, pen, and speech interactions for PE of MT. It allows users to directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place. All text manipulations are logged in an easily interpretable format (e.g., replaceWord with the old and new word) to facilitate translation process research. The results of a study with 11 professional translators show that participants are enthusiastic about having these alternatives, and suggest that pen and touch are well suited for deletion and reordering operations, whereas speech and multi-modal interaction are suitable for insertions and replacements.", "cite_spans": [ { "start": 18, "end": 39, "text": "(Alabau et al., 2013)", "ref_id": "BIBREF0" }, { "start": 127, "end": 157, "text": "(Alabau and Casacuberta, 2012)", "ref_id": "BIBREF1" }, { "start": 335, "end": 357, "text": "(O'Brien et al., 2014;", "ref_id": "BIBREF14" }, { "start": 358, "end": 387, "text": "Torres-Hostench et al., 2017)", "ref_id": "BIBREF17" }, { "start": 1061, "end": 1082, "text": "Herbig et al. (2019a)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "This section presents the MMPE prototype (see Figure 1 ), which combines pen, touch, and speech input with a traditional mouse and keyboard approach for PE of MT. The prototype is designed for professional translators in an office setting. 
A video demonstration is available at https://youtu.be/tkJ9OWmDd0s.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 54, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The MMPE Prototype", "sec_num": "2" }, { "text": "On the software side, we decided to use Angular 2 for the frontend, and node.js 3 for the backend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Apparatus", "sec_num": "2.1" }, { "text": "The frontend, including all of the newly implemented modalities for text editing, is what the system currently focuses on. While this Angular frontend could be used in a browser on any device, we initially design for the following hardware to optimally support the implemented interactions: we use a large tiltable touch & pen screen (see Figure 1a) , namely the Wacom Cintiq Pro 32 inch display. Together with the Flex Arm, this screen can be moved up in the air to work in a standing position, or it can be tilted and moved flat on the table (similar to how users use a tablet), thereby supporting better pen and touch interaction (as requested in Herbig et al. (2019a) ). To avoid limitations in ASR through a potentially bad microphone, we further use the Sennheiser PC 8 Headset for speech input. Last, mouse and keyboard are provided.", "cite_spans": [ { "start": 650, "end": 671, "text": "Herbig et al. (2019a)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 339, "end": 349, "text": "Figure 1a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Apparatus", "sec_num": "2.1" }, { "text": "Since it is not the focus of this work, the backend is kept rather minimal: it allows saving and loading of projects (including the MT) from JSON files, can store log files, etc. Here, the project files simply contain an array of segments with source, target, as well as any MT or TM proposal that should initially be shown for PE. 
Figure 1d shows our implemented horizontal source-target layout, where each segment's status (unedited, edited, confirmed) is visualized between source and target. On the far right, support tools are offered as requested in Herbig et al. (2019a):(1) the unedited MT output, to which the user can revert his editing using a button, and (2) a corpus combined with a dictionary: when entering a word or clicking/touching a word in the source view on the left, the Linguee 4 website is queried to show the word in context and display its primary and alternative translations. The top of the interface shows a toolbar where users can enable or disable speech recognition as well as spell checking, save and load projects, or navigate to another project.", "cite_spans": [ { "start": 425, "end": 454, "text": "(unedited, edited, confirmed)", "ref_id": null } ], "ref_spans": [ { "start": 332, "end": 341, "text": "Figure 1d", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Apparatus", "sec_num": "2.1" }, { "text": "The current segment is enlarged, thereby offering space for handwritten input and allowing the user to view a lot of context while still seeing the current segment in a comfortable manner (Herbig et al. (2019a)). The view for the current segment is further divided into the source segment (left) and two editing planes for the target, one for handwriting and drawing gestures (middle), and one for touch deletion & reordering, as well as standard mouse and keyboard input (right). Both initially show the MT proposal, and synchronize on changes to either one. The reason for having two editing fields instead of only one is that some interactions are overloaded, e.g., a touch drag can be interpreted as both hand-writing (middle) and reordering (right). Undo and redo functionality for all modalities, as well as confirming segments, are also implemented through buttons between the source and target texts, and can further be triggered through hotkeys. 
The target text is spell-checked, as a lack of this feature was criticized in Teixeira et al. (2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Layout", "sec_num": "2.2" }, { "text": "For handwriting recognition (see Figure 1b) , we use the MyScript Interactive Ink SDK 5 (https://developer.myscript.com/, accessed 07. Jan 2020). Apart from merely recognizing the written input, it offers gestures 6 like strike-through or scribble for deletions, breaking a word into two (draw line from top to bottom), and joining words (draw line from bottom to top). For inserting words, one can directly write into empty space, or create such space first by breaking the line (draw a long line from top to bottom), and then hand-writing the word. All changes are immediately interpreted, i.e., striking through a word deletes it immediately instead of showing it in a struck-through visualization. While it is not necessary to convert text from the handwritten appearance into computer font, the user can do so using a small button at the top of the editor. The editor further shows the recognized handwritten text immediately at the very top of the drawing view in a small gray font, where alternatives for the current recognition are offered when clicking on a recognized word. Since all changes from this drawing view are immediately synchronized into the right-hand view, the user can also see the recognized text there. Apart from using the pen, the user can use his/her finger or the mouse on the left-hand editing view for hand-writing.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 43, "text": "Figure 1b)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Left Target View: Handwriting", "sec_num": "2.3" }, { "text": "On the right-hand editing view, the user can delete words by simply double-tapping them with pen/finger touch, or reorder them through a simple drag and drop procedure (see Figure 1c) . 
This procedure visualizes the picked-up word as well as the current drop position through a placeholder element. Spaces between words and punctuation marks are automatically fixed, i.e., double spaces at the pickup position and missing spaces at the drop position are corrected. This reordering functionality is strongly related to Teixeira et al. (2019); however, only the currently dragged word is temporarily visualized as a tile to offer better readability. Furthermore, the cursor can be placed between words using a single tap, allowing the user to combine touch input with, e.g., the speech or keyboard modalities (see below). Naturally, the user can also edit and navigate using mouse and keyboard, where all common shortcuts work as expected from other software (e.g., ctrl+arrow keys or ctrl+c).", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 183, "text": "Figure 1c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Right Target View: Touch Reordering, Mouse & Keyboard", "sec_num": "2.4" }, { "text": "To minimize lag during speech recognition, we use a streaming approach, sending the recorded audio to IBM Watson servers to receive a transcription, which is then interpreted in a command-based fashion. Thus, our speech module not only handles dictations as in Teixeira et al. (2019) but can correct mistakes in place. The transcription itself is visualized at the top of the right target view (see Figure 1c) . As commands, the user has the option to \"insert\", \"delete\", \"replace\", and \"reorder\" words or subphrases. To specify the position if it is ambiguous, one can define anchors as in \"after\"/\"before\"/\"between\", or define the occurrence of the token (\"first\"/\"second\"/\"last\"). A full example is \"insert A after second B\", where A and B can be words or subphrases. In contrast to the other modalities, character-level commands are not supported, so instead of deleting an ending, one should replace the word. 
Again, spaces between words and punctuation marks are automatically fixed upon changes. For the German language, nouns are automatically capitalized using the list of nouns from Wiktionary 7 .", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 409, "text": "Figure 1c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Speech Input", "sec_num": "2.5" }, { "text": "Last, the user can use a multi-modal combination, i.e., pen/touch/mouse combined with speech. For this, a target word/position first needs to be specified by placing the cursor on or next to a word using the pen, finger touch, or the mouse/keyboard; alternatively, the word can be long-pressed with pen/touch. Afterwards, the user can use a voice command like \"delete\", \"insert A\", \"move after/before A/between A and B\", or \"replace by A\" without needing to specify the position/word, thereby making the commands less complex.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-modal Combinations", "sec_num": "2.6" }, { "text": "We implemented extensive logging functionality: on the one hand, we log the concrete keystrokes, touched pixel coordinates, etc.; on the other hand, all UI interactions (like segmentChange or undo/redo/confirm) are stored, allowing us to analyze the translator's use of MMPE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logging", "sec_num": "2.7" }, { "text": "Most importantly, however, we also log all text manipulations on a higher level to simplify text editing analysis: for insertions, we log whether a single or multiple words were inserted, and add the actual words and their positions as well as the segment's content before and after the insertion to the log entry. Deletions are logged analogously, and for reorderings, we add the old and the new position of the moved words to the log entry. 
Last, for replacements, we log whether only a part of a word was replaced (i.e., changing the word form), whether the whole word was replaced (i.e., correcting the lexical choice), or whether a group of words was replaced. In all cases, the words before and after the change, as well as their positions and the overall segment text are specified in the log entry.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logging", "sec_num": "2.7" }, { "text": "Furthermore, all log entries contain the modality that was used for the interaction, e.g., Speech or Pen, thereby allowing the analysis of which modality was used for which editing operation. All log entries with their timestamps are created within the Angular client and sent to the node.js server for storage in a JSON file.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logging", "sec_num": "2.7" }, { "text": "We evaluated the prototype with 11 professional translators 8 . Since our participants were German natives, we chose a EN-DE translation task to avoid ASR recognition errors occurring in non-native commands (Dragsted et al., 2011) . In the following, \"modalities\" refers to Touch (T), Pen (P), Speech (S), Mouse & Keyboard (MK), and Multi-Modal combinations (MM, see Section 2.6), while \"operations\" refers to Insertions, Deletions, Replacements, and Reorderings. More details on the evaluation are presented in Herbig et al. (2020) .", "cite_spans": [ { "start": 207, "end": 230, "text": "(Dragsted et al., 2011)", "ref_id": "BIBREF5" }, { "start": 512, "end": 532, "text": "Herbig et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "The study took approximately 2 hours per participant and involved three separate stages. First, participants filled in a questionnaire capturing demographics as well as information on CAT usage. 
In stage two, participants received an explanation of all of the prototype's features and then had 10-15 minutes to explore the prototype on their own and become familiar with the interface. Finally, stage three included the main experiment, which is a guided test of all implemented features combined with Likert scales and interviews, as described in detail below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3.1" }, { "text": "The main part tests each of the 5 modalities for each of our 4 operations in a structured way. For this, we prepared four sentences for each operation by manually introducing errors into the reference sentences from the WMT news test set 2018. Thus, overall each participant had to correct 4 segments per operation (4) using each modality (5), which results in 4 \u00d7 4 \u00d7 5 = 80 segments. Within the four sentences per operation, we tried to capture slightly different cases, like deleting single words or a group of words. The prototype was adapted for this controlled task such that it displays a popup when selecting a segment, visualizing the necessary correction to apply as well as the modality to use. The reason why we provided the correction to apply was to ensure a consistent editing behavior across all participants, thereby making the following measurements comparable: each modality had to be rated for each operation on 7-point Likert scales assessing whether the modality is a good fit, whether it is easy to use, and whether it is a good alternative to MK. Furthermore, participants had to order the modalities from best to worst for each operation. Last, we captured their comments in an interview after each operation and measured the times required to fix the introduced errors. In the end, a final unstructured interview to capture highlevel feedback on the interface was conducted. Figure 2 depicts the results of the 3 Likert scales of the 5 modalities for the 4 tasks. 
The participants' orderings of modalities for the operations were mostly in line with these ratings, as we will discuss in the next sections.", "cite_spans": [], "ref_spans": [ { "start": 1401, "end": 1409, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Method", "sec_num": "3.1" }, { "text": "According to subjective ratings, modality ordering, and comments, P(en) is among the best modalities for deletions and reordering. However, other modalities are superior for insertions and replacements, where P was seen as suitable only for short modifications, and to be avoided for more extended changes. In terms of timings, P was also among the fastest for deletions and reorderings, and among the slowest for insertions. What is interesting, however, is that P was significantly faster than S and MM for replacements (by 6 and 7 seconds on average) even though it was rated lower. Participants also commented very enthusiastically about pen reordering and deletions, as they would nicely resemble manual copy-editing. The main concern for hand-writing was the need to think about and to create space before actually writing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "3.2" }, { "text": "Results for T(ouch) were similarly good for deletions and reorderings, but it was considered worse for insertions and replacements. Furthermore, and as we expected due to its precision, pen was preferred to finger touch by most participants. However, in terms of timings, the two did not differ significantly, apart from replace operations (where pen was faster). Even for replacements, where T was rated as the worst modality, it actually was (non-significantly) faster than S and MM. S(peech) and M(ulti)-M(odal) PE were considered the worst and were also the slowest modalities for reordering and deletions. 
For insertions and replacements, however, these two modalities were rated and ordered 2nd (after MK) and in particular much better than P and T. Timing analysis agrees for insertions, being 2nd after MK; for replacements, however, S and MM were the slowest even though the ratings put them ahead of P and T. Insertions are the only operation where MM was (non-significantly) faster than S, since the position did not have to be verbally specified. Even though participants were concerned regarding formulating commands while mentally processing text, they considered S and MM especially interesting for adding longer text. The main advantage of MM would be that one has to speak less, albeit at the cost of doing two things at once. M(ouse) & K(eyboard) received the best scores for insertions and replacements, where it was also the fastest. Furthermore, it got good ratings for deletions and reorderings. For deletions, MK was comparably fast to P, T, and S. For reordering, however, it was slower than P and T. Some participants commented negatively on MK, stating that it only works well because of \"years of expertise\", and being \"unintuitive\" especially for reordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "3.2" }, { "text": "Overall, many participants provided very positive feedback on this first prototype combining pen, touch, speech, and multi-modal combinations for PE MT, encouraging us to continue. They especially highlighted that it was nice to have the option to switch between modalities. 
Furthermore, several promising ideas for improving the prototype were proposed, e.g., to visualize whitespaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "3.2" }, { "text": "While more and more professional translators are switching to the use of PE to increase productivity and reduce errors, current CAT interfaces still heavily focus on traditional mouse and keyboard input. This paper therefore presents MMPE, a CAT prototype combining pen, touch, speech, and multimodal interaction together with common mouse and keyboard input possibilities. Users can directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place. Our study with professional translators shows a high level of interest and enthusiasm about using these new modalities. For deletions and reorderings, pen and touch both received high subjective ratings, with pen being even better than mouse & keyboard. For insertions and replacements, speech and multi-modal interaction were seen as suitable interaction modes; however, mouse & keyboard were still favored and faster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "As a next step, we will improve the prototype based on the participants' valuable feedback. Furthermore, an eye tracker will be integrated into the prototype that can be used in combination with speech for cursor placement, thereby simplifying multi-modal PE. Last, we will investigate whether using the different modalities has an impact on cognitive load during PE (Herbig et al., 2019b) .", "cite_spans": [ { "start": 367, "end": 389, "text": "(Herbig et al., 2019b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "WMT 2019 translation task: http://matrix.statmt.org/, accessed 07. 
Jan 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://angular.io/, accessed 07. Jan 2020 3 https://nodejs.org/en/, accessed 07. Jan 2020 4 https://www.linguee.com/, accessed 07. Jan 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://developer.myscript.com/docs/concepts/editinggestures/, accessed 07. Jan 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://en.wiktionary.org/wiki/Category:German noun forms, accessed 07. Jan 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The study has been approved by the university's ethical review board, and participants were paid for their time. The data and analysis scripts can be found at https://mmpe. dfki.de/data/ACL2020/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was funded in part by the German Research Foundation (DFG) under grant number GE 2819/2-1 (project MMPE). 
We thank AMPLEXOR (https://www.amplexor.com) for their excellent support in providing access to professional human translators for our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "CAS-MACAT: An open source workbench for advanced computer aided translation", "authors": [ { "first": "Ragnar", "middle": [], "last": "Vicent Alabau", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bonk", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Carl", "suffix": "" }, { "first": "Mercedes", "middle": [], "last": "Casacuberta", "suffix": "" }, { "first": "Jes\u00fas", "middle": [], "last": "Garc\u00eda-Mart\u00ednez", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Gonz\u00e1lez", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Bartolom\u00e9", "middle": [], "last": "Leiva", "suffix": "" }, { "first": "", "middle": [], "last": "Mesa-Lao", "suffix": "" } ], "year": 2013, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "100", "issue": "", "pages": "101--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vicent Alabau, Ragnar Bonk, Christian Buck, Michael Carl, Francisco Casacuberta, Mercedes Garc\u00eda- Mart\u00ednez, Jes\u00fas Gonz\u00e1lez, Philipp Koehn, Luis Leiva, Bartolom\u00e9 Mesa-Lao, et al. 2013. CAS- MACAT: An open source workbench for advanced computer aided translation. 
The Prague Bulletin of Mathematical Linguistics, 100:101-112.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Study of electronic pen commands for interactive-predictive machine translation", "authors": [ { "first": "Vicent", "middle": [], "last": "Alabau", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Casacuberta", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the International Workshop on Expertise in Translation and Post-Editing - Research and Application", "volume": "", "issue": "", "pages": "17--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vicent Alabau and Francisco Casacuberta. 2012. Study of electronic pen commands for interactive-predictive machine translation. In Proceedings of the International Workshop on Expertise in Translation and Post-Editing - Research and Application, Copenhagen, Denmark, pages 17-18.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Comparison of post-editing productivity between professional translators and lay users", "authors": [ { "first": "Nora", "middle": [], "last": "Aranberri", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "A", "middle": [], "last": "Diaz De Ilarraza", "suffix": "" }, { "first": "Kepa", "middle": [], "last": "Sarasola", "suffix": "" } ], "year": 2014, "venue": "Proceedings of AMTA Third Workshop on Post-editing Technology and Practice", "volume": "", "issue": "", "pages": "20--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nora Aranberri, Gorka Labaka, A Diaz de Ilarraza, and Kepa Sarasola. 2014. Comparison of post-editing productivity between professional translators and lay users.
In Proceedings of AMTA Third Workshop on Post-editing Technology and Practice, pages 20-33.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "French speech recognition in an automatic dictation system for translators: The TransTalk project", "authors": [ { "first": "Julie", "middle": [], "last": "Brousseau", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Drouin", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Normandin", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Plamondon", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Eurospeech Fourth European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "193--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Brousseau, Caroline Drouin, George Foster, Pierre Isabelle, Roland Kuhn, Yves Normandin, and Pierre Plamondon. 1995. French speech recognition in an automatic dictation system for translators: The TransTalk project. In Proceedings of Eurospeech Fourth European Conference on Speech Communication and Technology, pages 193-196.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Long distance revisions in drafting and post-editing", "authors": [ { "first": "Michael", "middle": [], "last": "Carl", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "Kay", "middle": [ "Kristian" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "CICLing Special Issue on Natural Language Processing and its Applications", "volume": "", "issue": "", "pages": "193--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Carl, Martin Jensen, and Kay Kristian. 2010. Long distance revisions in drafting and post-editing.
CICLing Special Issue on Natural Language Processing and its Applications, pages 193-204.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Speaking your translation: Students' first encounter with speech recognition technology", "authors": [ { "first": "Barbara", "middle": [], "last": "Dragsted", "suffix": "" }, { "first": "Inger", "middle": [ "Margrethe" ], "last": "Mees", "suffix": "" }, { "first": "Inge Gorm", "middle": [], "last": "Hansen", "suffix": "" } ], "year": 2011, "venue": "Translation & Interpreting", "volume": "3", "issue": "1", "pages": "10--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Dragsted, Inger Margrethe Mees, and Inge Gorm Hansen. 2011. Speaking your translation: Students' first encounter with speech recognition technology. Translation & Interpreting, 3(1):10-43.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Towards an automatic dictation system for translators: The TransTalk project", "authors": [ { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Brousseau", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Normandin", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Plamondon", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the ICSLP International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Dymetman, Julie Brousseau, George Foster, Pierre Isabelle, Yves Normandin, and Pierre Plamondon. 1994. Towards an automatic dictation system for translators: The TransTalk project.
In Proceedings of the ICSLP International Conference on Spoken Language Processing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The efficacy of human post-editing for language translation", "authors": [ { "first": "Spence", "middle": [], "last": "Green", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Heer", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "439--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spence Green, Jeffrey Heer, and Christopher D Manning. 2013. The efficacy of human post-editing for language translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 439-448. ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "MMPE: A multimodal interface for post-editing machine translation", "authors": [ { "first": "Nico", "middle": [], "last": "Herbig", "suffix": "" }, { "first": "Tim", "middle": [], "last": "D\u00fcwel", "suffix": "" }, { "first": "Santanu", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Kalliopi", "middle": [], "last": "Meladaki", "suffix": "" }, { "first": "Mahsa", "middle": [], "last": "Monshizadeh", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Kr\u00fcger", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nico Herbig, Tim D\u00fcwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Kr\u00fcger, and Josef van Genabith. 2020. MMPE: A multi-modal interface for post-editing machine translation.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multi-modal approaches for post-editing machine translation", "authors": [ { "first": "Nico", "middle": [], "last": "Herbig", "suffix": "" }, { "first": "Santanu", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Kr\u00fcger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nico Herbig, Santanu Pal, Josef van Genabith, and Antonio Kr\u00fcger. 2019a. Multi-modal approaches for post-editing machine translation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, page 231. ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multi-modal indicators for estimating perceived cognitive load in post-editing of machine translation", "authors": [ { "first": "Nico", "middle": [], "last": "Herbig", "suffix": "" }, { "first": "Santanu", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Kr\u00fcger", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2019, "venue": "Machine Translation", "volume": "33", "issue": "1-2", "pages": "91--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nico Herbig, Santanu Pal, Mihaela Vela, Antonio Kr\u00fcger, and Josef van Genabith. 2019b. Multi-modal indicators for estimating perceived cognitive load in post-editing of machine translation.
Machine Translation, 33(1-2):91-115.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SEECAT: ASR & eye-tracking enabled computer assisted translation", "authors": [ { "first": "Mercedes", "middle": [ "Garcia" ], "last": "Martinez", "suffix": "" }, { "first": "Karan", "middle": [], "last": "Singla", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Tammewar", "suffix": "" }, { "first": "Bartolom\u00e9", "middle": [], "last": "Mesa-Lao", "suffix": "" }, { "first": "Ankita", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Anusuya", "suffix": "" }, { "first": "Banglore", "middle": [], "last": "Srinivas", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Carl", "suffix": "" } ], "year": 2014, "venue": "The 17th Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mercedes Garcia Martinez, Karan Singla, Aniruddha Tammewar, Bartolom\u00e9 Mesa-Lao, Ankita Thakur, MA Anusuya, Banglore Srinivas, and Michael Carl. 2014. SEECAT: ASR & eye-tracking enabled computer assisted translation. In The 17th Annual Conference of the European Association for Machine Translation, pages 81-88. European Association for Machine Translation.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Speech-enabled computer-aided translation: A satisfaction survey with post-editor trainees", "authors": [ { "first": "Bartolom\u00e9", "middle": [], "last": "Mesa-Lao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation", "volume": "", "issue": "", "pages": "99--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bartolom\u00e9 Mesa-Lao. 2014. Speech-enabled computer-aided translation: A satisfaction survey with post-editor trainees.
In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 99-103.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Assessing user interface needs of post-editors of machine translation", "authors": [ { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "O'Brien", "suffix": "" } ], "year": 2017, "venue": "Human Issues in Translation Technology", "volume": "", "issue": "", "pages": "127--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joss Moorkens and Sharon O'Brien. 2017. Assessing user interface needs of post-editors of machine translation. In Human Issues in Translation Technology, pages 127-148. Routledge.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Kanjingo - a mobile app for post-editing", "authors": [ { "first": "Sharon", "middle": [], "last": "O'Brien", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Joris", "middle": [], "last": "Vreeke", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 17th Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon O'Brien, Joss Moorkens, and Joris Vreeke. 2014. Kanjingo - a mobile app for post-editing.
In Proceedings of the 17th Annual Conference of the European Association for Machine Translation.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Creating a multimodal translation tool and testing machine translation integration using touch and voice", "authors": [ { "first": "Carlos", "middle": [ "S", "C" ], "last": "Teixeira", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Turner", "suffix": "" }, { "first": "Joris", "middle": [], "last": "Vreeke", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2019, "venue": "Informatics", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos S.C. Teixeira, Joss Moorkens, Daniel Turner, Joris Vreeke, and Andy Way. 2019. Creating a multimodal translation tool and testing machine translation integration using touch and voice. Informatics, 6.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Post-editing effort of a novel with statistical and neural machine translation", "authors": [ { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "Martijn", "middle": [], "last": "Wieling", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" } ], "year": 2018, "venue": "Frontiers in Digital Humanities", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonio Toral, Martijn Wieling, and Andy Way. 2018. Post-editing effort of a novel with statistical and neural machine translation.
Frontiers in Digital Humanities, 5:9.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Testing interaction with a mobile MT post-editing app", "authors": [ { "first": "Olga", "middle": [], "last": "Torres-Hostench", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "O'Brien", "suffix": "" }, { "first": "Joris", "middle": [], "last": "Vreeke", "suffix": "" } ], "year": 2017, "venue": "Translation & Interpreting", "volume": "9", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olga Torres-Hostench, Joss Moorkens, Sharon O'Brien, Joris Vreeke, et al. 2017. Testing interaction with a mobile MT post-editing app. Translation & Interpreting, 9(2):138.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Improving CAT tools in the translation workflow: New approaches and evaluation", "authors": [ { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" }, { "first": "Santanu", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Sudip", "middle": [], "last": "Kumar Naskar", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Machine Translation Summit XVII", "volume": "2", "issue": "", "pages": "8--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihaela Vela, Santanu Pal, Marcos Zampieri, Sudip Kumar Naskar, and Josef van Genabith. 2019. Improving CAT tools in the translation workflow: New approaches and evaluation.
In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 8-15.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Quantifying the influence of MT output in the translators' performance: A case study in technical translation", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcos Zampieri and Mihaela Vela. 2014. Quantifying the influence of MT output in the translators' performance: A case study in technical translation. In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 93-98.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Translation dictation vs. post-editing with cloud-based voice recognition: A pilot experiment", "authors": [ { "first": "Juli\u00e1n", "middle": [], "last": "Zapata", "suffix": "" }, { "first": "Sheila", "middle": [], "last": "Castilho", "suffix": "" }, { "first": "Joss", "middle": [], "last": "Moorkens", "suffix": "" } ], "year": 2017, "venue": "Proceedings of MT Summit XVI", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juli\u00e1n Zapata, Sheila Castilho, and Joss Moorkens. 2017. Translation dictation vs. post-editing with cloud-based voice recognition: A pilot experiment.
Proceedings of MT Summit XVI, 2.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "User Perspective on Translation Tools: Findings of a User Survey", "authors": [ { "first": "Anna", "middle": [], "last": "Zaretskaya", "suffix": "" }, { "first": "M\u00edriam", "middle": [], "last": "Seghiri", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Zaretskaya and M\u00edriam Seghiri. 2018. User Perspective on Translation Tools: Findings of a User Survey. Ph.D. thesis, University of Malaga.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Comparing post-editing difficulty of different machine translation errors in Spanish and German translations from English", "authors": [ { "first": "Anna", "middle": [], "last": "Zaretskaya", "suffix": "" }, { "first": "Mihaela", "middle": [], "last": "Vela", "suffix": "" }, { "first": "Gloria", "middle": [ "Corpas" ], "last": "Pastor", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Seghiri", "suffix": "" } ], "year": 2016, "venue": "International Journal of Language and Linguistics", "volume": "3", "issue": "3", "pages": "91--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Zaretskaya, Mihaela Vela, Gloria Corpas Pastor, and Miriam Seghiri. 2016. Comparing post-editing difficulty of different machine translation errors in Spanish and German translations from English. International Journal of Language and Linguistics, 3(3):91-100.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Handwriting on left target view. (c) Touch reordering on right target view. (d) Screenshot of the interface." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Overview of the MMPE prototype."
}, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Subjective ratings of the five modalities for the four operations on the 7-point Likert scales for goodness, ease of use, and whether it is a good alternative to MK." } } } }