{ "paper_id": "N19-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:01:50.354187Z" }, "title": "Lost in Interpretation: Predicting Untranslated Terminology in Simultaneous Interpretation", "authors": [ { "first": "Nikolai", "middle": [], "last": "Vogler", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "nikolaiv@cs.cmu.edu" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "gneubig@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Simultaneous interpretation, the translation of speech from one language to another in realtime, is an inherently difficult and strenuous task. One of the greatest challenges faced by interpreters is the accurate translation of difficult terminology like proper names, numbers, or other entities. Intelligent computerassisted interpreting (CAI) tools that could analyze the spoken word and detect terms likely to be untranslated by an interpreter could reduce translation error and improve interpreter performance. In this paper, we propose a task of predicting which terminology simultaneous interpreters will leave untranslated, and examine methods that perform this task using supervised sequence taggers. We describe a number of task-specific features explicitly designed to indicate when an interpreter may struggle with translating a word. Experimental results on a newly-annotated version of the NAIST Simultaneous Translation Corpus (Shimizu et al., 2014) indicate the promise of our proposed method. 1", "pdf_parse": { "paper_id": "N19-1010", "_pdf_hash": "", "abstract": [ { "text": "Simultaneous interpretation, the translation of speech from one language to another in realtime, is an inherently difficult and strenuous task. One of the greatest challenges faced by interpreters is the accurate translation of difficult terminology like proper names, numbers, or other entities. Intelligent computerassisted interpreting (CAI) tools that could analyze the spoken word and detect terms likely to be untranslated by an interpreter could reduce translation error and improve interpreter performance. In this paper, we propose a task of predicting which terminology simultaneous interpreters will leave untranslated, and examine methods that perform this task using supervised sequence taggers. We describe a number of task-specific features explicitly designed to indicate when an interpreter may struggle with translating a word. Experimental results on a newly-annotated version of the NAIST Simultaneous Translation Corpus (Shimizu et al., 2014) indicate the promise of our proposed method. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Simultaneous interpretation (SI) is the act of translating speech in real-time with minimal delay, and is crucial in facilitating international commerce, government meetings, or judicial settings involving non-native language speakers (Bendazzoli and Sandrelli, 2005; Hewitt et al., 1998) . However, SI is a cognitively demanding task that requires both active listening to the speaker and careful monitoring of the interpreter's own output. Even accomplished interpreters with years of training can struggle with unfamiliar concepts, fast-paced speakers, or memory constraints (Lambert and Moser-Mercer, 1994; Liu et al., 2004) . Human short-term memory is particularly at odds with the simultaneous interpreter as he or she must consistently recall and translate specific terminology uttered by the speaker (Lederer, 1978; Dar\u00f2 and Fabbro, 1994) . Despite psychological findings that rare words have long access times (Balota and Chumbley, 1985; Jescheniak and Levelt, 1994; Griffin and Bock, 1998) , listeners expect interpreters to quickly understand the source words and generate accurate translations. Therefore, professional simultaneous interpreters often work in pairs (Mill\u00e1n and Bartrina, 2012) ; while one interpreter performs, the other notes certain challenging items, such as dates, lists, names, or numbers (Jones, 2002) .", "cite_spans": [ { "start": 235, "end": 267, "text": "(Bendazzoli and Sandrelli, 2005;", "ref_id": "BIBREF1" }, { "start": 268, "end": 288, "text": "Hewitt et al., 1998)", "ref_id": "BIBREF14" }, { "start": 578, "end": 610, "text": "(Lambert and Moser-Mercer, 1994;", "ref_id": "BIBREF21" }, { "start": 611, "end": 628, "text": "Liu et al., 2004)", "ref_id": "BIBREF23" }, { "start": 809, "end": 824, "text": "(Lederer, 1978;", "ref_id": "BIBREF22" }, { "start": 825, "end": 847, "text": "Dar\u00f2 and Fabbro, 1994)", "ref_id": "BIBREF6" }, { "start": 920, "end": 947, "text": "(Balota and Chumbley, 1985;", "ref_id": "BIBREF0" }, { "start": 948, "end": 976, "text": "Jescheniak and Levelt, 1994;", "ref_id": "BIBREF17" }, { "start": 977, "end": 1000, "text": "Griffin and Bock, 1998)", "ref_id": "BIBREF12" }, { "start": 1178, "end": 1205, "text": "(Mill\u00e1n and Bartrina, 2012)", "ref_id": "BIBREF25" }, { "start": 1323, "end": 1336, "text": "(Jones, 2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Computers are ideally suited to the task of recalling items given their ability to store large amounts of information, which can be accessed almost instantaneously. As a result, there has been recent interest in developing computer-assisted interpretation (CAI; Plancqueel and Werner; Fantinuoli (2016 Fantinuoli ( , 2017b ) tools that have the ability to display glossary terms mentioned by a speaker, such as names, numbers, and entities, to an interpreter in a real-time setting. Such systems have the potential to reduce cognitive load on interpreters by allowing them to concentrate on fluent and accurate production of the target message.", "cite_spans": [ { "start": 262, "end": 284, "text": "Plancqueel and Werner;", "ref_id": null }, { "start": 285, "end": 301, "text": "Fantinuoli (2016", "ref_id": "BIBREF7" }, { "start": 302, "end": 322, "text": "Fantinuoli ( , 2017b", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These tools rely on automatic speech recognition (ASR) to transcribe the source speech, and display terms occurring in a prepared glossary. While displaying all terminology in a glossary achieves high recall of terms, it suffers from low precision. This could potentially have the unwanted effect of cognitively overwhelming the interpreter with too many term suggestions (Stewart et al., 2018) . Thus, an important desideratum of this technology is to only provide terminology Figure 1: The simultaneous interpretation process, which could be augmented by our proposed terminology tagger embedded in a computer-assisted interpreting interface on the interpreter's computer. In this system, automatic speech recognition transcribes the source speech, from which features are extracted, input into the tagger, and term predictions are displayed on the interface in real-time. Finally, machine translations of the terms can be suggested.", "cite_spans": [ { "start": 372, "end": 394, "text": "(Stewart et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "assistance when the interpreter requires it. For instance, an NLP tool that learns to predict only terms an interpreter is likely to miss could be integrated into a CAI system, as suggested in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 199, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce the task of predicting the terminology that simultaneous interpreters are likely to leave untranslated using only information about the source speech and text. We approach the task by implementing a supervised, sliding window, SVM-based tagger imbued with delexicalized features designed to capture whether words are likely to be missed by an interpreter. We additionally contribute new manual annotations for untranslated terminology on a seven talk subset of an existing interpreted TED talk corpus (Shimizu et al., 2014) . In experiments on the newly-annotated data, we find that intelligent term prediction can increase average precision over the heuristic baseline by up to 30%.", "cite_spans": [ { "start": 529, "end": 551, "text": "(Shimizu et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before we describe our supervised model to predict untranslated terminology in SI, we first define the task and describe how to create annotated data for model training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Untranslated Terminology in SI", "sec_num": "2" }, { "text": "Formally, we define untranslated terminology with respect to a source sentence S, sentence created by a translator R, and sentence created by an interpreter I. Specifically, we define any consecutive sequence of words s i:j , where 0 \u2264 i \u2264 N \u2212 1 (inclusive) and i < j \u2264 N (exclusive), in source sentence S 0:N that satisfies the following criteria to be an untranslated term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Untranslated Terminology", "sec_num": "2.1" }, { "text": "\u2022 Termhood: It consists of only numbers or nouns. We specifically focus on numbers or nouns for two reasons: (1) based on the interpretation literature, these categories contain items that are most consistently difficult to recall (Jones, 2002; Gile, 2009) , and (2) these words tend to have less ambiguity in their translations than other types of words, making it easier to have confidence in the translations proposed to interpreters.", "cite_spans": [ { "start": 231, "end": 244, "text": "(Jones, 2002;", "ref_id": "BIBREF19" }, { "start": 245, "end": 256, "text": "Gile, 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Untranslated Terminology", "sec_num": "2.1" }, { "text": "\u2022 Relevance: A translation of s i:j , we denote t, occurs in a sentence-aligned reference translation R produced by a translator in an offline setting. This indicates that in a timeunconstrained scenario, the term should be translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Untranslated Terminology", "sec_num": "2.1" }, { "text": "\u2022 Interpreter Coverage: It is not translated, literally or non-literally, by the interpreter in interpreter output I. This may reasonably allow us to conclude that translation thereof may have presented a challenge, resulting in the content not being conveyed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Untranslated Terminology", "sec_num": "2.1" }, { "text": "Importantly, we note that the phrase untranslated terminology entails words that are either dropped mistakenly, intentionally due to the interpreter deciding they are unnecessary to carry across the meaning, or mistranslated. We contrast this with literal and non-literal term coverage, which encompasses words translated in a verbatim and a paraphrastic way, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Untranslated Terminology", "sec_num": "2.1" }, { "text": "To obtain data with labels that satisfy the previous definition of untranslated terminology, we can leverage existing corpora containing sentencealigned source, translation, and simultaneous interpretation data. Several of these resources exist, such as the NAIST Simultaneous Translation Corpus (STC) (Shimizu et al., 2014) and the European Parliament Translation and Interpreting Corpus (EPTIC) (Bernardini et al., 2016) . Next, we process the source sentences, identifying terms that satisfy the termhood, relevance, and interpreter coverage criteria listed previously.", "cite_spans": [ { "start": 302, "end": 324, "text": "(Shimizu et al., 2014)", "ref_id": "BIBREF36" }, { "start": 397, "end": 422, "text": "(Bernardini et al., 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "\u2022 Termhood Tests: To check termhood for each source word in the input, we first partof-speech (POS) tag the input, then check the tag of the word and discard any that are not nouns or numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "\u2022 Relevance and Interpreter Coverage Tests: Next, we need to measure relevancy (whether a corresponding target-language term appears in translated output), and interpreter coverage (whether a corresponding term does not appear in interpreted output). An approximation to this is whether one of the translations listed in a bilingual dictionary appears in the translated or interpreted outputs respectively, and as a first pass we identify all source terms with the corresponding targetlanguage translations. However, we found that this automatic method did not suffice to identify many terms due to lack of dictionary coverage and also to non-literal translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "To further improve the accuracy of the annotations, we commissioned human translators to annotate whether a particular source term is translated literally, non-literally, or untranslated by the translator or interpreters (details given in \u00a74).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "Once these inclusion criteria are calculated, we can convert all untranslated terms into an appropriate format conducive to training supervised taggers. In this case, we use an IO tagging scheme (Ramshaw and Marcus, 1999) where all words corresponding to untranslated terms are assigned . ", "cite_spans": [ { "start": 195, "end": 221, "text": "(Ramshaw and Marcus, 1999)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "Interp \u30ab\u30ea\u30d5\u30a9\u30eb\u30cb\u30a2 \u3067 \u306f California \u3001 4 4 \u30d1\u30fc\u30bb\u30f3\u30c8 percent \u5c11\u306a \u304f \u306a \u3063 \u3066 decline \u3057\u307e \u3044 \u307e \u3057 \u305f \u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Term Annotations", "sec_num": "2.2" }, { "text": "With supervised training data in hand, we can create a model for predicting untranslated terminology that could potentially be used to provide interpreters with real-time assistance. In this section, we outline a couple baseline models, and then describe an SVM-based tagging model, which we specifically tailor to untranslated terminology prediction for SI by introducing a number of handcrafted features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Untranslated Terminology", "sec_num": "3" }, { "text": "In order to compare with current methods for term suggestion in CAI, such as Fantinuoli (2017a), we first introduce a couple of heuristic baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Baselines", "sec_num": "3.1" }, { "text": "\u2022 Select noun/# POS tag: Our first baseline recalls all words that meet the termhood requirement from \u00a72. Thus, it will achieve perfect recall at the cost of precision, which will equal the percentage of I-tags in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Baselines", "sec_num": "3.1" }, { "text": "\u2022 Optimal frequency threshold: To increase precision over this naive baseline, we also experiment with a baseline that has a frequency threshold, and only output words that are rarer than this frequency threshold in a large web corpus, with the motivation that rarer words are more likely to be difficult for translators to recall and be left untranslated. likely to leave a term untranslated. We thus define these features, and resort to machine-learned classifiers to integrate them and improve performance. State-of-the-art sequence tagging models process sequences in both directions prior to making a globally normalized prediction for each item in the sequence (Huang et al., 2015; Ma and Hovy, 2016) . However, the streaming, realtime nature of simultaneous interpretation constrains our model to sequentially process data from left-to-right and make local, monotonic predictions (as noted in Oda et al. (2014) ; Grissom II et al. (2014) , among others). Therefore, we use a sliding-window, linear support vector machine (SVM) classifier (Cortes and Vapnik, 1995; Joachims, 1998 ) that uses only local features of the history to make independent predictions, as depicted in Fig. 3 . 2 Formally, given a sequence of source words with their side information (such as timings or POS tags) S = s 0:N , we slide a window W of size k incrementally across S, extracting features \u03c6(s i\u2212k+1:i+1 ) from s i and its k \u2212 1 predecessors. Since our definition of terminology only allows for nouns and numbers, we restrict prediction to words of the corresponding POS tags Q = {CD, NN, NNS, NNP, NNPS} using the Stanford POS tagger (Toutanova et al., 2003) . That is, we assign a POS tag p i to each word from s i and only extract features/predict using the classifier if p i \u2208 Q; otherwise we always assign the Outside tag. This dis- 2 We also experimented with a unidirectional LSTM tagger (Hochreiter and Schmidhuber, 1997; Graves, 2012) , but found it ineffective on our small amount of annotated data.", "cite_spans": [ { "start": 667, "end": 687, "text": "(Huang et al., 2015;", "ref_id": "BIBREF16" }, { "start": 688, "end": 706, "text": "Ma and Hovy, 2016)", "ref_id": "BIBREF24" }, { "start": 900, "end": 917, "text": "Oda et al. (2014)", "ref_id": "BIBREF28" }, { "start": 920, "end": 944, "text": "Grissom II et al. (2014)", "ref_id": "BIBREF13" }, { "start": 1045, "end": 1070, "text": "(Cortes and Vapnik, 1995;", "ref_id": "BIBREF5" }, { "start": 1071, "end": 1085, "text": "Joachims, 1998", "ref_id": "BIBREF18" }, { "start": 1624, "end": 1648, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF38" }, { "start": 1827, "end": 1828, "text": "2", "ref_id": null }, { "start": 1884, "end": 1918, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF15" }, { "start": 1919, "end": 1932, "text": "Graves, 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 1181, "end": 1187, "text": "Fig. 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Heuristic Baselines", "sec_num": "3.1" }, { "text": "allows words that are of other POS tags from being classified as untranslated terminology and greatly reduces the class imbalance issue when training the classifier. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM-based Tagging Model", "sec_num": "3.2" }, { "text": "Due to the fact that only a small amount of humaninterpreted human-annotated data can be created for this task, it is imperative that we give the model the precise information it needs to generalize well. To this end, we propose multiple taskspecific, non-lexical features to inform the classifier about certain patterns that may indicate terminology likely to be left untranslated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "\u2022 Elapsed time: As discussed in \u00a71, SI is a cognitively demanding task. Interpreters often work in pairs and usually swap between active duty and notetaking roles every 15-20 minutes (Lambert and Moser-Mercer, 1994) . Towards the end of talks or long sentences, an interpreter may become fatigued or face working memory issues-especially if working alone. Thus, we monitor the number of minutes elapsed in the talk and the index of the word in the talk/current sentence to inform the classifier.", "cite_spans": [ { "start": 183, "end": 215, "text": "(Lambert and Moser-Mercer, 1994)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "\u2022 Word timing: We intuit that a presenter's quick speaking rate can cause the simultaneous interpreter to potentially drop some terminology. We obtain word timing informa-tion from the source speech via forced alignment tools (Ochshorn and Hawkins, 2016; Povey et al., 2011) . The feature function extracts both the number of words in the past m seconds and the time deltas between the current word and previous words in the window.", "cite_spans": [ { "start": 226, "end": 254, "text": "(Ochshorn and Hawkins, 2016;", "ref_id": "BIBREF27" }, { "start": 255, "end": 274, "text": "Povey et al., 2011)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "\u2022 Word frequency: We anticipate that interpreters often leave rarer source words untranslated because they are probably more difficult to recall from memory. On the other hand, we would expect loan words, words adopted from a foreign language with little or no modification, to be easier to recognize and translate for an interpreter. We extract the binned unigram frequency of the current source word from the large monolingual Google Web 1T Ngrams corpus (Brants and Franz, 2006) . We define a loan word as an English word with a Katakana translation in the bilingual dictionaries (eij; Breen, 2004 ).", "cite_spans": [ { "start": 457, "end": 481, "text": "(Brants and Franz, 2006)", "ref_id": "BIBREF3" }, { "start": 589, "end": 600, "text": "Breen, 2004", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "\u2022 Word characteristics and syntactic features: We extract the number of characters and number of syllables in the word, as determined by lookup in the CMU Pronunciation dictionary (Weide, 1998) . Numbers are converted to their word form prior to dictionary lookup. Generally, we expect longer words, both by character and syllable count, to represent more technical or marked vocabulary, which may be challenging to translate. Additionally, we syntactically inform the model with POS tags and regular expression patterns for numerals.", "cite_spans": [ { "start": 180, "end": 193, "text": "(Weide, 1998)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "These features are extracted via sliding a window over the sentence, as displayed in Fig. 3 and discussed in \u00a73.2. Thus, we also utilize previous information from the window when predicting for the current word. This previous information includes past predictions, word characteristics and syntax, and source speech timing.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 91, "text": "Fig. 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Task-specific Features", "sec_num": "3.3" }, { "text": "In this section, we detail our application of the term annotation procedure in \u00a72 to an SI corpus and analyze our results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Annotation and Analysis", "sec_num": "4" }, { "text": "For SI data, we use a seven-talk, manually-aligned subset of the English-to-Japanese NAIST STC (Shimizu et al., 2014) , which consists of source subtitle transcripts, En\u2192Ja offline translations, and interpretations of English TED talk videos from professional simultaneous interpreters with 1, 4, and 15 years of experience, who are dubbed B-rank, A-rank, and S-rank 4 . TED talks offer a unique and challenging format for simultaneous interpreters because the speakers typically talk indepth about a single topic, and such there are many new terms that are difficult for an interpreter to process consistently and reliably. The prevalence of this difficult terminology presents an interesting testbed for our proposed method.", "cite_spans": [ { "start": 95, "end": 117, "text": "(Shimizu et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation of NAIST STC", "sec_num": "4.1" }, { "text": "First, we use the Stanford POS Tagger (Toutanova et al., 2003) on the source subtitle transcripts to identify word chunks with a POS tag in {CD, NN, NNS, NNP, NNPS}, discarding words with other tags. After performing word segmentation on the Japanese data using KyTea (Neubig et al., 2011) , we automatically detect for translation coverage between the source subtitles, SI, and translator transcripts with a string-matching program, according to the relevance and coverage tests from \u00a72. The En\u2194Ja EIJIRO (2.1m entries) (eij) and EDICT (393k entries) (Breen, 2004) bilingual dictionaries are combined to provide term translations. Additionally, we construct individual dictionaries for each TED talk with key acronyms, proper names, and other exclusive terms (e.g., UN-ESCO, CO2, conflict-free, Pareto-improving) to increase this automatic coverage. Nouns are lemmatized prior to lookup in the bilingual dictionary, and we discard any remaining closed-class function words.", "cite_spans": [ { "start": 38, "end": 62, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF38" }, { "start": 268, "end": 289, "text": "(Neubig et al., 2011)", "ref_id": "BIBREF26" }, { "start": 552, "end": 565, "text": "(Breen, 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation of NAIST STC", "sec_num": "4.1" }, { "text": "While this automatic process is satisfactory for identifying if a translated term occurs in the translator's or interpreters' transcripts (relevancy), it is inadequate for verifying the terms that occur in the translator's transcript, but not the interpreters' outputs (interpreter coverage). Therefore, we commissioned seven professional translators to review and annotate those source terms that could not be marked as translated by the automatic process as either translated, untranslated, or non-literally translated in each target sentence. Lastly, we add I-tags to each word in the untranslated terms and O-tags to the words in literally and non-literally translated terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of NAIST STC", "sec_num": "4.1" }, { "text": "non-lit. raw untrans. # % # % # % T 2,213 80 158 6 14 B 1,134 41 92 3 1,546 56 A 1,151 42 114 4 1,507 54 S 1,531 55 170 6 1,071 39 Table 1 : Translated, non-literally translated, and raw untranslated term annotations obtained in the annotation process using the NAIST STC for (T)ranslator, and {B,A,S}-rank SI. Note that these raw untranslated term figures are directly from the annotation process, prior to filtering based off of the term relevancy constraint from \u00a72. Table 1 displays the term coverage annotation statistics for the translators and interpreters. Since translators performed in an offline setting without time constraints, they were able to translate the largest number of source terms into the target language, with 80% being literally translated, and 6% being non-literally translated. On the other hand, interpreters tend to leave many source terms uncovered in their translations. The A-rank and Brank interpreters achieve roughly the same level of term coverage, with the A-rank being only slightly more effective than B-rank at translating terms literally and non-literally. This is in contrast with Shimizu et al. (2014) 's automatic analysis of translation quality on a three-talk subset, in which Arank has slightly higher translation error rate and lower BLEU score (Papineni et al., 2002) than the B-rank interpreter. The most experienced S-rank interpreter leaves 17% fewer terms than B-rank uncovered in the translations. More interestingly, the number of non-literally translated terms also correlates with experience-level. In fact, the Srank interpreter actually exceeds the translator in the number of non-literal translations produced. Non-literal translations can occur when the interpreter fully comprehended the source expression, but chose to generate it in a way that better fit the translation in terms of fluency.", "cite_spans": [ { "start": 1137, "end": 1158, "text": "Shimizu et al. (2014)", "ref_id": "BIBREF36" }, { "start": 1307, "end": 1330, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 22, "end": 114, "text": "# % # % # % T 2,213 80 158 6 14 B 1,134 41 92 3 1,546 56 A 1,151 42 114 4 1,507", "ref_id": "TABREF2" }, { "start": 144, "end": 151, "text": "Table 1", "ref_id": null }, { "start": 483, "end": 490, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "trans.", "sec_num": null }, { "text": "In Table 2 , we show the number of terms left untranslated by each interpreter rank after processing our annotations for the relevancy constraint of \u00a72. Since the number of per-word I-tags is only slightly higher than the number of untranslated terms, most such terms consist of only a single % I-tag of SI # untrans. terms all noun/# B-rank 1,256 10.8 45.4 A-rank 1,206 10.4 43.6 S-rank 812 7.0 29.6 Table 2 : Final untranslated term count and number of Itags after filtering based off of the relevancy constraint ( \u00a72). That is, only the raw untranslated source terms that appear in the translator's transcript are truly considered untranslated. word of about 6.5 average characters for all ranks. Capitalized terms (i.e., named entities/locations) constitute about 14% of B-rank, 13% of A-rank, and 15% of S-rank terms. Numbers represent about 5% of untranslated terms for each rank.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": null }, { "start": 401, "end": 408, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Annotation Analysis", "sec_num": "4.2" }, { "text": "The untranslated term overlap between interpreters is visualized in Fig. 4 . Most difficult terms are shared amongst interpreter ranks as only 23.2% (B), 22.1% (A), and 11.7% (S) of terms are unique for each interpreter. We show a sampling of some unique noun terms on the outside of the Venn diagram, along with the untranslated terms shared among all ranks in the center. Among these unique terms, capitalized terms make up 19% of B-rank/S-rank, but only 13% of A-rank. 7.4% of S-rank's unique terms are numbers compared with about 5% for the other two ranks.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 74, "text": "Fig. 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Annotation Analysis", "sec_num": "4.2" }, { "text": "We design our experiments to evaluate both the effectiveness of a system to predict untranslated terminology in simultaneous interpretation and the usefulness of our features given the small amount Table 3 : Average precision score cross-validation results with feature ablation for the untranslated term class on test data. Optimal word frequency threshold is determined on dev set of each fold. Evaluation performed on a word-level. Highest numbers per column are bolded. Each setting is statistically significant at p < 0.05 by paired bootstrap (Koehn, 2004) . of aligned and labeled training data we possess.", "cite_spans": [ { "start": 548, "end": 561, "text": "(Koehn, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1" }, { "text": "We perform leave-one-out cross-validation using five of the seven TED talks as the training set, one as the development set, and one as the test set. Hyperparameters (SVM's penalty term, the number of bins for the word frequency feature=9, and sliding window size=8) are tuned on the dev. fold and the best model, determined by average precision score, is used for the test fold predictions. Both training and predictions are performed on a sentence-level. During training, we weight the two classes inversely proportional to their frequencies in the training data to ensure that the majority Otag does not dominate the I-tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "5.1" }, { "text": "Since we are ultimately interested in the precision and recall trade-off among the methods, we evaluate our results using precision-recall curves in Fig. 5 and the average precision (AP) scores in Table 3 . AP 5 summarizes the precision-recall curve by calculating the weighted mean of the precisions at each threshold, where the weights are equal to the increase in recall from the previous threshold. If the method is embedded in a CAI system, then the user could theoretically adjust the precision-recall threshold to balance helpful term suggestions with cognitive load.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Fig. 5", "ref_id": "FIGREF6" }, { "start": 198, "end": 206, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "Overall, we tend to see that all methods perform best when tested on data from the B-rank Select POS in the last 5 years we 've added 70000000 tons of co2 every 24 hours 25000000 tons every day to the oceans Optimal freq in the last 5 years we 've added 70000000 tons of co2 every 24 hours 25000000 tons every day to the oceans SVM in the last 5 years we 've added 70000000 tons of co2 every 24 hours 25000000 tons every day to the oceans interpreter, and observe a decline in performance across all methods with an increase in interpreter experience. We believe that this is due to a decrease in the number of untranslated terminology as experience increases (i.e., class imbalance) coupled with the difficulty of predicting such exclusive word occurrences from only source speech and textual cues. Ablation results in Table 3 show that not all of the features are able to improve classifier performance for all interpreters. While the elapsed time and word timing features tend to cause a degradation in performance when removed, ablating the word frequency and characteristic/syntax features can actually improve average precision score. Word frequency, which is a recallbased feature, seems to be more helpful for B-and S-rank interpreters because it is challenging to recall the smaller number of untranslated terms from the data. Although the characteristic/syntax features are also recall-based, we see a decline in performance for them across all interpreter ranks because they are simply too noisy. When ablating the uninformative features for each rank, the SVM is able to increase AP vs. the optimal word frequency baseline by about 20%, 15%, and 30% for the B, A, and S-rank interpreters, respectively.", "cite_spans": [], "ref_spans": [ { "start": 820, "end": 827, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "In Table 4 , we show an example taken from the first test fold with results from each of the three methods. The SVM's increased precision is able to greatly reduce the number of false positives, which we argue could overwhelm the interpreter if left unfiltered and shown on a CAI system. Nevertheless, one of the most apparent false positive errors that still occurs with our method is on units following numbers, such as the word tons in the example. Also, because our model prioritizes avoiding this type I error, it is more susceptible to type II errors, such as ignoring untranslated terms 24 and day. A user study with our method embedded in a CAI would reveal the true costs of these different errors, but we leave this to future work.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5.2" }, { "text": "In this paper, we introduce the task of automatically predicting terminology likely to be left untranslated in simultaneous interpretation, create annotated data from the NAIST ST corpus, and propose a sliding window, SVM-based tagger with task-specific features to perform predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "We plan to assess the effectiveness of our approach in the near future by integrating it in a heads-up display CAI system and performing a user study. In this study, we hope to discover the ideal precision and recall tradeoff point regarding cognitive load in CAI terminology assistance and use this feedback to adjust the model. Other future work could examine the effectiveness of the approach in the opposite direction (Japanese to English) or on other language pairs. Additionally, speech features could be extracted from the source or interpreter audio to reduce the dependence on a strong ASR system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "We note that a streaming POS tagger would have to be used in a real-time setting, as in(Oda et al., 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "{B, A, S}-rank is the Japanese equivalent to {C, B, A}rank on the international scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We compute AP using the scikit-learn implementation(Pedregosa et al., 2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1745016 and National Science Foundation EAGER under Grant No. 1748642. We would like to thank Jordan Boyd-Graber, Hal Daum\u00e9 III and Leah Findlater for helpful discussions, Arnav Kumar for assistance with the term annotation interface, and the anonymous reviewers for their useful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The locus of word-frequency effects in the pronunciation task: Lexical access and/or production?", "authors": [ { "first": "A", "middle": [], "last": "David", "suffix": "" }, { "first": "James", "middle": [ "I" ], "last": "Balota", "suffix": "" }, { "first": "", "middle": [], "last": "Chumbley", "suffix": "" } ], "year": 1985, "venue": "Journal of Memory and Language", "volume": "24", "issue": "1", "pages": "89--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A Balota and James I Chumbley. 1985. The lo- cus of word-frequency effects in the pronunciation task: Lexical access and/or production? Journal of Memory and Language, 24(1):89-106.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An approach to corpus-based interpreting studies: developing EPIC (European Parliament Interpreting Corpus)", "authors": [ { "first": "Claudio", "middle": [], "last": "Bendazzoli", "suffix": "" }, { "first": "Annalisa", "middle": [], "last": "Sandrelli", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the EU-High-Level Scientific Conference Series MuTra 2005-Challenges of Multidimensional Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Bendazzoli and Annalisa Sandrelli. 2005. An approach to corpus-based interpreting studies: de- veloping EPIC (European Parliament Interpreting Corpus). In Proceedings of the EU-High-Level Sci- entific Conference Series MuTra 2005-Challenges of Multidimensional Translation.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "From EPIC to EPTIC -exploring simplification in interpreting and translation from an intermodal perspective", "authors": [ { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Mili\u010devi\u0107", "suffix": "" } ], "year": 2016, "venue": "Target. International Journal of Translation Studies", "volume": "28", "issue": "1", "pages": "61--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silvia Bernardini, Adriano Ferraresi, and Maja Mili\u010devi\u0107. 2016. From EPIC to EPTIC -exploring simplification in interpreting and translation from an intermodal perspective. Target. International Jour- nal of Translation Studies, 28(1):61-86.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Web 1T 5-gram version 1", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram version 1.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Jmdict: a japanese-multilingual dictionary", "authors": [ { "first": "James", "middle": [], "last": "Breen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Multilingual Linguistic Resources", "volume": "", "issue": "", "pages": "71--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Breen. 2004. Jmdict: a japanese-multilingual dictionary. In Proceedings of the Workshop on Mul- tilingual Linguistic Resources, pages 71-79. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Supportvector networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine learning", "volume": "20", "issue": "3", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Verbal memory during simultaneous interpretation: Effects of phonological interference", "authors": [ { "first": "Valeria", "middle": [], "last": "Dar\u00f2", "suffix": "" }, { "first": "Franco", "middle": [], "last": "Fabbro", "suffix": "" } ], "year": 1994, "venue": "Applied Linguistics", "volume": "15", "issue": "4", "pages": "365--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valeria Dar\u00f2 and Franco Fabbro. 1994. Verbal mem- ory during simultaneous interpretation: Effects of phonological interference. Applied Linguistics, 15(4):365-381.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Interpretbank. redefining computer-assisted interpreting tools", "authors": [ { "first": "Claudio", "middle": [], "last": "Fantinuoli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Translating and the Computer 38 Conference in London", "volume": "", "issue": "", "pages": "42--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Fantinuoli. 2016. Interpretbank. redefining computer-assisted interpreting tools. In Proceedings of the Translating and the Computer 38 Conference in London, pages 42-52.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Computer-assisted interpreting: challenges and future perspectives", "authors": [ { "first": "Claudio", "middle": [], "last": "Fantinuoli", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Fantinuoli. 2017a. Computer-assisted inter- preting: challenges and future perspectives. Trends in E-Tools and Resources for Translators and Inter- preters, page 153.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speech recognition in the interpreter workstation. Translating and the Computer 39", "authors": [ { "first": "Claudio", "middle": [], "last": "Fantinuoli", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudio Fantinuoli. 2017b. Speech recognition in the interpreter workstation. Translating and the Com- puter 39, page 25.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Basic concepts and models for interpreter and translator training", "authors": [ { "first": "Daniel", "middle": [], "last": "Gile", "suffix": "" } ], "year": 2009, "venue": "", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gile. 2009. Basic concepts and models for in- terpreter and translator training, volume 8. John Benjamins Publishing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sequence transduction with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1211.3711" ] }, "num": null, "urls": [], "raw_text": "Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Constraint, word frequency, and the relationship between lexical processing levels in spoken word production", "authors": [ { "first": "M", "middle": [], "last": "Zenzi", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "", "middle": [], "last": "Bock", "suffix": "" } ], "year": 1998, "venue": "Journal of Memory and Language", "volume": "38", "issue": "3", "pages": "313--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zenzi M Griffin and Kathryn Bock. 1998. Constraint, word frequency, and the relationship between lexical processing levels in spoken word production. Jour- nal of Memory and Language, 38(3):313-338.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation", "authors": [ { "first": "Alvin", "middle": [], "last": "Grissom", "suffix": "" }, { "first": "I", "middle": [ "I" ], "last": "", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "John", "middle": [], "last": "Morgan", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "1342--1352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum\u00e9 III. 2014. Don't until the final verb wait: Reinforcement learning for simulta- neous machine translation. pages 1342-1352.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Court interpreting services in state and federal courts: Reasons and options for inter-court coordination", "authors": [ { "first": "W", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Hannaford", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Gill", "suffix": "" }, { "first": "Melissa", "middle": [], "last": "Cantrell", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W Hewitt, Paula Hannaford, Catherine Gill, and Melissa Cantrell. 1998. Court interpreting services in state and federal courts: Reasons and options for inter-court coordination. National Center for State Courts.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bidirectional LSTM-CRF models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Word frequency effects in speech production: Retrieval of syntactic information and of phonological form", "authors": [ { "first": "D", "middle": [], "last": "J\u00f6rg", "suffix": "" }, { "first": "Willem Jm", "middle": [], "last": "Jescheniak", "suffix": "" }, { "first": "", "middle": [], "last": "Levelt", "suffix": "" } ], "year": 1994, "venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition", "volume": "20", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg D Jescheniak and Willem JM Levelt. 1994. Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4):824.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Text categorization with support vector machines: Learning with many relevant features", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1998, "venue": "ECML-98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many rel- evant features. In ECML-98.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Conference interpreting explained", "authors": [ { "first": "Roderick", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2002, "venue": "", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roderick Jones. 2002. Conference interpreting ex- plained, volume 6. St Jerome Pub.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. pages 388-395.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bridging the gap: Empirical research in simultaneous interpretation", "authors": [ { "first": "Sylvie", "middle": [], "last": "Lambert", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Moser-Mercer", "suffix": "" } ], "year": 1994, "venue": "John Benjamins Publishing", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sylvie Lambert and Barbara Moser-Mercer. 1994. Bridging the gap: Empirical research in simultane- ous interpretation, volume 3. John Benjamins Pub- lishing.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Simultaneous interpretation-units of meaning and other features", "authors": [ { "first": "Marianne", "middle": [], "last": "Lederer", "suffix": "" } ], "year": 1978, "venue": "Language interpretation and communication", "volume": "", "issue": "", "pages": "323--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marianne Lederer. 1978. Simultaneous interpretation-units of meaning and other features. In Language interpretation and communication, pages 323-332. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Working memory and expertise in simultaneous interpreting. Interpreting", "authors": [ { "first": "Minhua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Diane", "middle": [ "L" ], "last": "Schallert", "suffix": "" }, { "first": "Patrick J", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2004, "venue": "", "volume": "6", "issue": "", "pages": "19--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minhua Liu, Diane L Schallert, and Patrick J Carroll. 2004. Working memory and expertise in simultane- ous interpreting. Interpreting, 6(1):19-42.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.01354" ] }, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Routledge handbook of translation studies", "authors": [ { "first": "Carmen", "middle": [], "last": "Mill\u00e1n", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Bartrina", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Mill\u00e1n and Francesca Bartrina. 2012. The Routledge handbook of translation studies. Rout- ledge.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pointwise prediction for robust, adaptable japanese morphological analysis", "authors": [ { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yosuke", "middle": [], "last": "Nakata", "suffix": "" }, { "first": "Shinsuke", "middle": [], "last": "Mori", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", "volume": "2", "issue": "", "pages": "529--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies: short papers-Volume 2, pages 529-533. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Gentle: A forced aligner", "authors": [ { "first": "Robert", "middle": [], "last": "Ochshorn", "suffix": "" }, { "first": "Max", "middle": [], "last": "Hawkins", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Ochshorn and Max Hawkins. 2016. Gentle: A forced aligner.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Optimizing segmentation strategies for simultaneous speech translation", "authors": [ { "first": "Yusuke", "middle": [], "last": "Oda", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "551--556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimiz- ing segmentation strategies for simultaneous speech translation. pages 551-556.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Syntax-based simultaneous translation through prediction of unseen syntactic constituents", "authors": [ { "first": "Yusuke", "middle": [], "last": "Oda", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "198--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based simultaneous translation through prediction of un- seen syntactic constituents. pages 198-207.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. pages 311-318.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The Kaldi speech recognition toolkit", "authors": [ { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Arnab", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Boulianne", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Glembek", "suffix": "" }, { "first": "Nagendra", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Hannemann", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Motlicek", "suffix": "" }, { "first": "Yanmin", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Schwarz", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. pages 1-4.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "A", "middle": [], "last": "Lance", "suffix": "" }, { "first": "Mitchell P", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1999, "venue": "Natural language processing using very large corpora", "volume": "", "issue": "", "pages": "157--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large cor- pora, pages 157-176. Springer.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Collection of a simultaneous translation corpus for comparative analysis", "authors": [ { "first": "Hiroaki", "middle": [], "last": "Shimizu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "670--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroaki Shimizu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Collec- tion of a simultaneous translation corpus for compar- ative analysis. In LREC, pages 670-673. Citeseer.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Automatic Estimation of Simultaneous Interpreter Performance", "authors": [ { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Nikolai", "middle": [], "last": "Vogler", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.04016" ] }, "num": null, "urls": [], "raw_text": "Craig Stewart, Nikolai Vogler, Junjie Hu, Jordan Boyd- Graber, and Graham Neubig. 2018. Automatic Es- timation of Simultaneous Interpreter Performance. arXiv preprint arXiv:1805.04016.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Feature-rich part-ofspeech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher D Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 173-180. Association for Compu- tational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The CMU pronunciation dictionary, release 0.6", "authors": [ { "first": "Robert", "middle": [], "last": "Weide", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Weide. 1998. The CMU pronunciation dictio- nary, release 0.6. Carnegie Mellon University.", "links": null } }, "ref_entries": { "FIGREF2": { "num": null, "text": "A source sentence and its corresponding interpretation. Untranslated terms are surrounded by brackets and each word in the term is labeled with an I-tag. The interpreter mistakes the term 40 for 4, and omits Sierra snowpack.the label I, and all others are assigned a label O, as shown inFig. 2.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Our tagging model at prediction time. A sliding window SVM, informed by a task-specific feature function \u03c6 with access to the POS tags, source speech timing (in seconds), and other information, predicts whether or not words matching the termhood constraint (in blue) are likely to be left untranslated in SI.", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "Untranslated term overlap between interpreters.", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "Precision-recall curves for each interpreter rank.", "type_str": "figure", "uris": null }, "TABREF0": { "text": "While these baselines are simple and intuitive, we argue that there are a large number of other features that indicate whether an interpreter is", "content": "
IOOII
SVMSVMSVMSVMSVM
\u03d5\u03d5\u03d5\u03d5\u03d5
words. . .40percentdecline in theSierra snowpack
CDNNNNINDTNNPNN
side info178.13 . . .178.44 . . .178.85 . . .179.33 . . .179.43179.55 . . .179.96 . . .
", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "B-rank output from our model contrasted with baselines. Type I errors are in red, type II errors in orange, and correctly tagged untranslated terminology in blue.", "content": "", "num": null, "type_str": "table", "html": null } } } }