{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:38.342764Z" }, "title": "BIT's system for AutoSimTrans 2021", "authors": [ { "first": "Mengge", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Shuoying", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Minqin", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Zhipeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Yuhang", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Institute of Technology", "location": { "settlement": "Beijing", "country": "China" } }, "email": "guoyuhang@bit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we introduce our Chinese-English simultaneous translation system participating in AutoSimTrans 2021. In simultaneous translation, translation quality and latency are both important. In order to reduce the translation latency, we cut the streaming-input source sentence into segments and translate the segments before the full sentence is received. In order to obtain high-quality translations, we pretrain a translation model with adequate corpus and fine-tune the model with domain adaptation and sentence length adaptation. The experimental results on the development dataset show that our system performs better than the baseline system.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper we introduce our Chinese-English simultaneous translation system participating in AutoSimTrans 2021. In simultaneous translation, translation quality and latency are both important. In order to reduce the translation latency, we cut the streaming-input source sentence into segments and translate the segments before the full sentence is received. In order to obtain high-quality translations, we pretrain a translation model with adequate corpus and fine-tune the model with domain adaptation and sentence length adaptation. The experimental results on the development dataset show that our system performs better than the baseline system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation greatly facilitates communication between people of different language, and the current neural machine translation model has achieved great success in machine translation field. However, for some occasions that have higher requirements for translation speed, such as in simultaneous interpretation dynamic subtitles and dynamic subtitles application fields. Machine translation models that use full sentences as translation units need to wait for the speaker to speak the full sentence before starting translation, in which the translation delay is unacceptable. 
In order to reduce the delay, translation must start before the complete sentence is received. However, an incomplete sentence may be ungrammatical and semantically incomplete, so the translation quality decreases compared with translating full sentences. Furthermore, different languages may have different word orders. Even though Chinese and English both follow an SVO sentence structure, there are still many reordering phenomena when translating between them. Sentence reordering and differing word-order conventions make simultaneous translation considerably more difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since the latency of using a full sentence as the translation unit is unacceptable, and the translation of incomplete sentences is difficult and not guaranteed to be reliable, we consider cutting long sentences into appropriate sub-sentences, each of which is grammatically correct and semantically complete, so that a suitable translation can be obtained. By decomposing the translation of a long sentence into the translation of shorter sub-sentences, translation can start before the complete long sentence is received. This strategy for achieving low-latency simultaneous translation can be summarized as a segmentation strategy (Rangarajan Sridhar et al., 2013). At the same time, we observe that a sentence can often be divided into independent sub-sentences for translation. In the example in Table 1, the Chinese and English sentences can both be cut, and each Chinese sub-sentence can be translated as a shorter translation unit. We can also observe that there is no cross alignment between the two sub-sentences: the English translation of the first Chinese sub-sentence has no semantic or lexical connection with the translation of the second Chinese sub-sentence, and no word alignment crosses the segmentation point. This indicates that it is feasible to divide the full sentences in the parallel corpus into shorter sub-sentences.", "cite_spans": [ { "start": 631, "end": 664, "text": "(Rangarajan Sridhar et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows: Section 2 introduces the overall framework of the model, Section 3 gives a detailed description of the fine-tuning, and finally we explain and analyze the experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section introduces the overall framework of our submission to the AutoSimTrans 2021 competition. The whole model uses a typical segmentation strategy to achieve simultaneous translation. It consists of a sentence boundary detector and a machine translation module. The sentence boundary detector reads the streaming input text and produces appropriate segments. The segments are fed to the downstream translation module, and the translation results of the segments are spliced to obtain the full translation. 
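As a minimal illustration of this pipeline (not our released implementation; is_complete_segment and translate_segment are hypothetical stand-ins for the BERT-based detector of Section 2.1 and the fine-tuned Transformer of Section 2.2), the control flow can be sketched as:

```python
def simultaneous_translate(stream, is_complete_segment, translate_segment):
    """Segmentation-based simultaneous translation: buffer the streaming
    source tokens, cut a segment as soon as the boundary detector accepts
    the buffered prefix, translate it, and splice the results together."""
    buffer, outputs = [], []
    for token in stream:                    # source tokens arrive one by one
        buffer.append(token)
        if is_complete_segment(buffer):     # boundary detector: "cut here"
            outputs.append(translate_segment(buffer))
            buffer = []                     # start collecting the next segment
    if buffer:                              # flush the final partial segment
        outputs.append(translate_segment(buffer))
    return " ".join(outputs)                # spliced full translation
```
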
The overall framework of the model is shown in Figure 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "2" }, { "text": "The sentence boundary detector can be regarded as a text classifier: for the streaming-input sentence, the detector needs to judge whether the part received so far can be used as a suitable segment to be translated. The boundary detector is implemented with a pre-trained Chinese BERT (Devlin et al. (2018) ) model as the text representation, on top of which a fully connected layer is added to form a classifier. In terms of data, long sentences are divided into segments according to punctuation marks, and these segments are regarded as sub-sentences. Positive and negative examples are constructed according to this rule to fine-tune the pre-trained model, yielding a classifier with an accuracy of 92.5%. Following the above process, a boundary detector that can process streaming input text is obtained.", "cite_spans": [ { "start": 312, "end": 333, "text": "(Devlin et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Boundary Detector", "sec_num": "2.1" }, { "text": "The translation module is implemented with the tensor2tensor framework, training a transformer-big model (Vaswani et al., 2017) as the machine translation module. We use a pre-training and fine-tuning method to obtain better performance on the target task.", "cite_spans": [ { "start": 107, "end": 129, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Module", "sec_num": "2.2" }, { "text": "First, we use the CWMT19 dataset as a large-scale corpus to pre-train the machine translation model. The CWMT19 corpus is a standard Chinese-English text corpus, but the target test set in the competition consists of speech transcriptions and their translations, which differ in domain from standard text. It is therefore necessary to fine-tune the translation model with a speech-domain corpus. On the other hand, the translator needs to translate sub-sentences when decoding, and there is a mismatch in length and amount of information between sub-sentences and longer full sentences. So we further fine-tune the translation model to adapt it to sub-sentence translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Module", "sec_num": "2.2" }, { "text": "3 Fine-tuning Corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Module", "sec_num": "2.2" }, { "text": "To make the machine translation model trained on the standard text corpus more suitable for translating transcriptions in the speech domain, the translation model needs to be fine-tuned with a corpus from the corresponding speech domain. We use the manual transcriptions of the Chinese speeches provided by the organizer, together with their translations, as a parallel corpus to fine-tune the pre-trained translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain fine-tuning", "sec_num": "3.1" }, { "text": "The pre-training and domain fine-tuning processes only train the translation model on full-sentence corpora. But when the model performs simultaneous translation, it needs to translate sub-sentences during decoding, which causes a mismatch between training and testing. 
In order to adapt the machine translation model to the shorter sub-sentence translation scenario, it is necessary to construct a sub-sentence corpus composed of Chinese and English sub-sentence pairs and use it to further fine-tune the machine translation model. To meet the requirement of domain adaptation at the same time, the sub-sentence corpus is constructed from the Chinese-English corpus provided by the organizer, so that fine-tuning adapts the machine translation model to the sub-sentence translation scenario. The following is a detailed description of how a full sentence is processed into sub-sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "Ideally, if a Chinese-English sentence pair is divided into two or more sub-sentence pairs, the Chinese sentence and the English sentence should be cut at the same time to obtain the same number of sub-sentences, and corresponding Chinese and English sub-sentences should contain the same information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "In other words, a Chinese sub-sentence alone should provide enough information to translate the corresponding English sub-sentence. To meet this requirement of information integrity, we use the fast_align (Dyer et al., 2013) word alignment tool to obtain Chinese-to-English and English-to-Chinese alignments respectively, and merge them into symmetric alignments. Given a Chinese input sentence X = {x_1, x_2, ..., x_n} and the target English sentence Y = {y_1, y_2, ..., y_m}, word alignment yields a set of alignment links", "cite_spans": [ { "start": 321, "end": 340, "text": "(Dyer et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "A = {<x_i, y_j> | x_i \u2208 X, y_j \u2208 Y}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "Then the word alignment matrix is built from the alignment results. Segmenting a Chinese-English full-sentence pair is equivalent to dividing its word alignment matrix. A division position splits the matrix into four blocks; when the lower-left and upper-right blocks are both zero matrices, the two sub-sentences have no cross-word alignment, and sub-sentences can be obtained at that segmentation position. Moreover, the traversal-based division algorithm can divide a sentence in multiple valid ways, effectively increasing the number of sub-sentence pairs in the sub-sentence corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "An example of sentence segmentation using the word alignment matrix is shown in Figure 2. An alignment matrix is constructed from the alignment results of the Chinese and English words: a position marked '1' means the corresponding Chinese and English words are aligned, and the remaining positions have no alignment. 
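As a toy sketch of this criterion (the function below is illustrative, not our exact traversal code), a split position is kept only when every alignment link stays inside the upper-left or lower-right block of the matrix:

```python
def valid_splits(alignments, src_len, tgt_len):
    """Enumerate split positions (s, t) such that no alignment link crosses
    the split, i.e. the lower-left and upper-right blocks of the word
    alignment matrix contain no '1' entries. Alignment links are 0-based
    (source index, target index) pairs."""
    splits = []
    for s in range(1, src_len):        # split before source word s
        for t in range(1, tgt_len):    # split before target word t
            if all((i < s) == (j < t) for i, j in alignments):
                splits.append((s, t))
    return splits

# Toy example: 4 source words, 4 target words, one local reordering.
print(valid_splits([(0, 1), (1, 0), (2, 2), (3, 3)], 4, 4))  # [(2, 2), (3, 3)]
```
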
Two dashed boxes are marked in the figure, corresponding to two reasonable division results. In each case the dashed box covers the first sub-sentence and the remaining part is the second sub-sentence. We retain all reasonable segmentation results when segmenting sentences, that is, both segmentation results in the figure are kept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence length fine-tuning", "sec_num": "3.2" }, { "text": "The boundary detector uses the chinese_L-12_H-768_A-12 BERT checkpoint as the pre-trained model, and the hidden size of the fully connected layer is the same as that of BERT. Using the simultaneous interpretation corpus provided by the organizer, we cut sentences into sub-sentences based on punctuation and construct positive and negative examples for fine-tuning. We thereby obtain a sentence boundary recognizer that can recognize sentence boundaries and perform real-time segmentation of the streaming input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment settings", "sec_num": "4.1" }, { "text": "Our translation model is based on the tensor2tensor framework. We use the transformer_big hyperparameter set and set the problem to translate_enzh_wmt32k_rev. We train the model on 6 GPUs for 9 days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment settings", "sec_num": "4.1" }, { "text": "In our experiments, we pre-train the translator on the CWMT19 dataset, fine-tune it on the BSTC (Zhang et al., 2021) dataset, and evaluate the model on the BSTC development set, which contains the transcriptions and translations of 16 speeches. CWMT19 is a standard text translation corpus. BSTC contains 68 hours of Chinese speech with the corresponding Chinese transcriptions and English translations. In this paper, we only use the Chinese and English texts in the speech domain.", "cite_spans": [ { "start": 87, "end": 107, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment settings", "sec_num": "4.1" }, { "text": "For domain adaptation, we use the golden transcribed text as the fine-tuning corpus. For sentence length adaptation, we construct sub-sentence corpora from a corpus containing only golden transcriptions and from a corpus containing both ASR and golden transcriptions, and use the boundary detector as a filter to remove unsuitable sub-sentences. The fine-tuning corpora are summarized in Table 2. 
The same sentence boundary detector is used by all models, and the different machine translation modules are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "-domain fine-tuned: pre-trained on the CWMT19 corpus and fine-tuned on the golden transcriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "-sub-sentence fine-tuned(golden+ASR): based on the domain fine-tuned model, further fine-tuned on the segmented golden&ASR transcription corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "-sub-sentence fine-tuned(golden): based on the domain fine-tuned model, further fine-tuned on the segmented golden transcription corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "-sub-sentence fine-tuned(filtered golden): based on the domain fine-tuned model, further fine-tuned on the filtered segmented golden transcription corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "The learning rate is set to 2e-5 during fine-tuning; domain fine-tuning is carried out for 2000 steps and segmentation fine-tuning for 4000 steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sub-sentence fine-tuning", "sec_num": "4.2" }, { "text": "We use the average lagging (AL) latency metric as defined in (Ma et al., 2018). Here t is the decoding step, \u03c4 is the cut-off decoding step at which the full source sentence has been read, g(t) denotes the number of source words read by the encoder at decoding step t, and r = |y|/|x| is the target-to-source length ratio. A lower AL value means lower latency and a better real-time simultaneous system.", "cite_spans": [ { "start": 55, "end": 72, "text": "(Ma et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Latency metric", "sec_num": "4.3" }, { "text": "AL = (1/\u03c4) \u2211_{t=1}^{\u03c4} [ g(t) \u2212 (t\u22121)/r ], where \u03c4 = argmin_t [ g(t) = |x| ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latency metric", "sec_num": "4.3" }, { "text": "Fine-tuning corpus | Type | Sentence pairs\ngolden transcription | full-sentence | 37k\nsegmented golden&ASR transcription | sub-sentence | 2555k\nsegmented golden transcription | sub-sentence | 668k\nsegmented golden (filtered) transcription | sub-sentence | 246k\nTable 2 : The first (full-sentence) corpus is provided by the organizer. The three sub-sentence corpora are constructed by word alignment from the golden and ASR transcription corpora provided by the organizer; the third sub-sentence corpus is the filtered segmentation corpus.", "cite_spans": [], "ref_spans": [ { "start": 329, "end": 336, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.4" }, { "text": "The performance of each model on the development set is listed in Table 3. According to the experimental results, the performance of the fine-tuned models did not meet expectations. 
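For reference, the AL values in Table 3 follow the definition in Section 4.3; a minimal sketch of the computation (the helper name is ours, and g is assumed to be given as a list with g[t-1] source words read when target word t is emitted) is:

```python
def average_lagging(g, src_len, tgt_len):
    """Average Lagging (AL) of Ma et al. (2018): the average, over the first
    tau target steps, of how far the policy lags behind an ideal policy.
    g[t-1] is the number of source words read when target word t is emitted."""
    r = tgt_len / src_len               # target-to-source length ratio r = |y|/|x|
    # tau: the first decoding step at which the whole source has been read
    tau = next((t for t in range(1, tgt_len + 1) if g[t - 1] >= src_len), tgt_len)
    return sum(g[t - 1] - (t - 1) / r for t in range(1, tau + 1)) / tau

# Toy example: a wait-3 policy on a 6-word source / 6-word target pair.
print(average_lagging([3, 4, 5, 6, 6, 6], 6, 6))  # 3.0
```
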
Using only the sub-sentence corpus built from the golden transcriptions brought a greater quality reduction than using the corpus that includes both the ASR and the golden transcriptions. Comparing the model fine-tuned on the segmented golden transcriptions with the model fine-tuned on the filtered segmented golden transcriptions, we find that although the number of sentence pairs in the sub-sentence corpus decreases after filtering, the filtered model obtains a relatively higher score, which reflects the effectiveness of the filtering operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.4" }, { "text": "The main reason for the unsatisfactory fine-tuning effect may be that the sub-sentence corpus contains too much noise: it is difficult to obtain high-quality segmentation results from the word alignments alone. Although we have filtered out many inappropriate sentence pairs, there is still a lot of noise in the sub-sentence corpus. And because the sub-sentences are shorter, translation errors in the fine-tuning sentence pairs have a greater negative impact on the translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.4" }, { "text": "Here is an example illustrating the difficulty of sentence division. For the sentence shown in Table 4, we list the source sentence and the target sentence, together with a literal translation of each phrase to convey the meaning of the Chinese words. From the perspective of word alignment, the sentence can easily be divided at the comma position to obtain two sub-sentences. For the first sub-sentence pair, the Chinese and English sub-sentences contain the same information, and a good English translation can be obtained from the Chinese alone. But for the second sub-sentence pair, it is hard to reproduce the golden translation relying only on the Chinese sub-sentence: directly translating the Chinese would yield something like \"amazing by hearing.\" This is because the golden translation was produced from the full sentence, and free translation was used to make the English expression more fluent. If the translation model only reads the second sub-sentence, it is difficult to obtain a translation close to the golden result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and analysis", "sec_num": "4.4" }, { "text": "This paper uses a segmentation strategy to achieve low-latency simultaneous translation. Several related works also use segmentation strategies to divide long sentences into segments for translation: (Xiong et al., 2019) focus on improving the coherence of the sub-sentence translations, and (Zhang et al., 2020) focus on solving the problem of long-distance reordering in simultaneous translation.", "cite_spans": [ { "start": 205, "end": 225, "text": "(Xiong et al., 2019)", "ref_id": "BIBREF9" }, { "start": 300, "end": 320, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "In addition, there are two other strategies for achieving simultaneous translation. One is a more flexible translation strategy based on sentence prefixes: the simultaneous translation process is modeled as a sequence of read/write actions, a suitable policy is defined to determine the action sequence, and the translator is adjusted to make the model more suitable for translating sentence prefixes (Ma et al., 2018) (Arivazhagan et al., 2019). 
Another type is translation based on dynamic refresh, which does not require adjusting the machine translation model: whenever the input grows, all of the input is re-translated and the previously generated translation is overwritten (Niehues et al., 2016) (Arivazhagan et al., 2020b) (Arivazhagan et al., 2020a).", "cite_spans": [ { "start": 454, "end": 471, "text": "(Ma et al., 2018)", "ref_id": "BIBREF5" }, { "start": 472, "end": 498, "text": "(Arivazhagan et al., 2019)", "ref_id": "BIBREF0" }, { "start": 733, "end": 755, "text": "(Niehues et al., 2016)", "ref_id": "BIBREF6" }, { "start": 756, "end": 783, "text": "(Arivazhagan et al., 2020b)", "ref_id": "BIBREF2" }, { "start": 784, "end": 811, "text": "(Arivazhagan et al., 2020a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "Model | AL | BLEU\ndomain fine-tuned | 7.467 | 19.45\nsub-sentence fine-tuned(golden+ASR) | 7.478 | 19.02\nsub-sentence fine-tuned(golden) | 7.823 | 16.28\nsub-sentence fine-tuned(filtered golden) | 7.795 | 16.67\nTable 3 : Performance of each model on the development set. AL is the latency metric and BLEU is the translation quality metric.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "These things are all nature's amazing creations , amazing by hearing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source sentence Literal translation", "sec_num": null }, { "text": "These are all amazing creations of the nature , you can tell just from their names . Table 4 : An example that is hard to segment. The sentence can be segmented at the comma. The literal translation of the second sub-sentence is quite different from the target.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 92, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Target sentence", "sec_num": null }, { "text": "In this paper we describe a simultaneous translation method that reduces translation delay by cutting the full sentence into sub-sentences. We fine-tune a pre-trained translation model in terms of domain and sentence length. The sub-sentence corpus is constructed by word alignment; we found that directly using all the sub-sentences we obtained has a negative impact on translation performance, but performance improves after filtering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In the end, we obtained translation results that exceeded the baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "Supported by the National Key Research and Development Program of China (No. 2016YFB0801200)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "The results of each model on the development set are shown in Figure 3, where the curves wait-1, wait-3, wait-5 and full-sent are the wait-k series models and the full-sentence model provided by the organizer; each of them is a Transformer neural machine translation model. Each scattered point represents a segmentation model from this paper. 
According to the results, it can be seen that the domain fine-tuning model and a better-performed subsentence fine-tuning model are better than the wait-k series model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Development results", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Monotonic infinite lookback attention for simultaneous machine translation", "authors": [ { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.05218" ] }, "num": null, "urls": [], "raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simul- taneous machine translation. arXiv preprint arXiv:1906.05218.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Retranslation versus streaming for simultaneous translation", "authors": [ { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.03643" ] }, "num": null, "urls": [], "raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, and George Foster. 2020a. Re- translation versus streaming for simultaneous translation. arXiv preprint arXiv:2004.03643.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Re-translation strategies for long form, simultaneous, spoken language translation", "authors": [ { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Te", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Pallavi", "middle": [], "last": "Baljekar", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7919--7923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naveen Arivazhagan, Colin Cherry, Isabelle Te, Wolfgang Macherey, Pallavi Baljekar, and George Foster. 2020b. Re-translation strategies for long form, simultaneous, spoken language translation. In ICASSP 2020-2020 IEEE Inter- national Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7919-7923. 
IEEE.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language un- derstanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A simple, fast, and effective reparameterization of IBM model 2", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Chahuneau", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Pro- ceedings of the 2013 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 644-648, Atlanta, Georgia. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Stacl: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework", "authors": [ { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Renjie", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Kaibo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Baigong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hairong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.08398" ] }, "num": null, "urls": [], "raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, et al. 2018. Stacl: Simultaneous translation with implicit anticipation and controllable latency us- ing prefix-to-prefix framework. 
arXiv preprint arXiv:1810.08398.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dynamic transcription for lowlatency speech translation", "authors": [ { "first": "Jan", "middle": [], "last": "Niehues", "suffix": "" }, { "first": "Eunah", "middle": [], "last": "Thai Son Nguyen", "suffix": "" }, { "first": "Thanh-Le", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kilgour", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Sperber", "suffix": "" }, { "first": "Alex", "middle": [], "last": "St\u00fcker", "suffix": "" }, { "first": "", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2016, "venue": "Interspeech", "volume": "", "issue": "", "pages": "2513--2517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Niehues, Thai Son Nguyen, Eunah Cho, Thanh-Le Ha, Kevin Kilgour, Markus M\u00fcller, Matthias Sperber, Sebastian St\u00fcker, and Alex Waibel. 2016. Dynamic transcription for low- latency speech translation. In Interspeech, pages 2513-2517.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Segmentation strategies for streaming speech translation", "authors": [ { "first": "Vivek", "middle": [], "last": "Kumar Rangarajan", "suffix": "" }, { "first": "John", "middle": [], "last": "Sridhar", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Rathinavelu", "middle": [], "last": "Ljolje", "suffix": "" }, { "first": "", "middle": [], "last": "Chengalvarayan", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "230--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathi- navelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 230-238, Atlanta, Georgia. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03762" ] }, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
arXiv preprint arXiv:1706.03762.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Dutongchuan: Context-aware translation model for simultaneous interpreting", "authors": [ { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.12984" ] }, "num": null, "urls": [], "raw_text": "Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Dutongchuan: Context-aware transla- tion model for simultaneous interpreting. arXiv preprint arXiv:1907.12984.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bstc: A largescale chinese-english speech translation dataset", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qinfei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.03575" ] }, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, Haifeng Wang, Ying Chen, and Qinfei Li. 2021. Bstc: A large- scale chinese-english speech translation dataset. arXiv preprint arXiv:2104.03575.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning adaptive segmentation policy for simultaneous translation", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2280--2289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2020. Learning adaptive segmentation policy for simultaneous translation. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 2280-2289.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Segment sentence by word alignment matrix.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "num": null, "text": "Segment example, first sub-sentence is in red and the second one is in black.", "content": "" } } } }