{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:39.013773Z" }, "title": "System Description on Third Automatic Simultaneous Translation Workshop", "authors": [ { "first": "Zhang", "middle": [], "last": "Yiqiao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Huazhong Agricultural University", "location": { "postCode": "430070", "settlement": "Wuhan", "country": "China" } }, "email": "qiaoyizhang@webmail.hzau.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes my submission to the Third Automatic Simultaneous Translation Workshop at NAACL 2022. The submission covers the Chinese audio to English text task, the Chinese text to English text task, and the English text to Spanish text task. For the two text-to-text tasks, I use the STACL model of PaddleNLP. For the audio-to-text task, I first use DeepSpeech2 to transcribe the audio into text and then apply the STACL model to handle the text-to-text task. The submission results show that the method used can achieve low delay with only a few training samples.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes my submission to the Third Automatic Simultaneous Translation Workshop at NAACL 2022. The submission covers the Chinese audio to English text task, the Chinese text to English text task, and the English text to Spanish text task. For the two text-to-text tasks, I use the STACL model of PaddleNLP. For the audio-to-text task, I first use DeepSpeech2 to transcribe the audio into text and then apply the STACL model to handle the text-to-text task. The submission results show that the method used can achieve low delay with only a few training samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The submitted system consists of two parts. One is an audio-to-text system, which translates Chinese audio into English text. 
The second part is the text-to-text model, which translates source text into the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the text-to-text translation task, the system used is the STACL model (Ma et al., 2019). All training data are processed by Byte Pair Encoding (Sennrich et al., 2016). In addition, the strategies used by the model in training and inference are the same. For example, if the wait-k strategy in inference is 1, the wait-k in training is also 1.", "cite_spans": [ { "start": 73, "end": 90, "text": "(Ma et al., 2019)", "ref_id": "BIBREF3" }, { "start": 146, "end": 169, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the audio-to-text translation task, the DeepSpeech2 model (Amodei et al., 2015) is used as a preprocessing step for the STACL model. The DeepSpeech2 model translates Chinese audio segments into Chinese text segments, which are then input into the STACL model to generate the target-language text.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Amodei et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The submitted results show that the STACL model achieves low delay in the text translation tasks. In the audio translation task, however, the system can only generate results with high delay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 describes the training data used in the submitted system. Section 3 describes the model, training strategy, and results. 
The conclusions are given in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section describes the datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "The dataset used for the Chinese-to-English (Zh-En) translation task is extracted from AST, which is provided by the NAACL workshop. This dataset contains 214 JSON files, and each JSON file contains a parallel Chinese-English corpus. The data extracted from these JSON files comprise 37,901 Chinese-English sample pairs. After byte pair encoding, the samples are used to train the Zh-En translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Zh-En Text Translation Dataset", "sec_num": "2.1" }, { "text": "The BPE vocabulary for the Zh-En translation task can be found in the GitHub project of PaddleNLP (Contributors, 2021).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Zh-En Text Translation Dataset", "sec_num": "2.1" }, { "text": "The dataset used for the English-to-Spanish (En-Es) text translation task was obtained from the United Nations Parallel Corpus (Ziemski et al., 2016). The En-Es dataset contains 21,911,121 samples. After byte pair encoding, the dataset is used to train the En-Es text translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "En-Es Text Translation Dataset", "sec_num": "2.2" }, { "text": "For obtaining the BPE vocabulary, I segment the source dataset into subword units by Subword Neural Machine Translation (Sennrich et al., 2015). 
The code for segmentation can be found in (Sennrich, 2021).", "cite_spans": [ { "start": 120, "end": 143, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "En-Es Text Translation Dataset", "sec_num": "2.2" }, { "text": "The training data of the Chinese speech recognition model is AISHELL (Bu et al., 2017), which is an open-source Mandarin speech corpus. In the submitted system, I use only the DeepSpeech2 model pre-trained on AISHELL. The next section presents the models used in the submitted system and discusses the results.", "cite_spans": [ { "start": 69, "end": 86, "text": "(Bu et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Audio-to-Text Dataset", "sec_num": "2.3" }, { "text": "In the text translation task, the model is STACL (Ma et al., 2019), which is a translation architecture for simultaneous interpreting scenarios. To train the model, the wait-k strategy is adopted: the model waits for k words of the source text and then starts to translate. For example, when k is 2, the model starts translating the first word of the target language only after obtaining the second word of the source text.", "cite_spans": [ { "start": 49, "end": 66, "text": "(Ma et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "STACL model", "sec_num": "3.1.1" }, { "text": "In the inference process, the model decodes one word at a time. Once the source sentence has been read in full, the remaining untranslated words are generated at once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STACL model", "sec_num": "3.1.1" }, { "text": "In the Zh-En translation task, I trained the model with wait-k = 1 and wait-k = 3. 
The details of the training parameters are shown in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results in Zh-En task", "sec_num": "3.1.2" }, { "text": "When wait-k is 1, the AL of the submitted result is -1.28, and the BLEU is 14.86. When wait-k is 3, the AL is -0.52, and the BLEU is 14.84. The two results have almost the same accuracy, suggesting that the dataset used may not be sufficient for the translation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results in Zh-En task", "sec_num": "3.1.2" }, { "text": "In the En-Es translation task, the maximum number of epochs is set to 1, and the other parameters are the same as in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results in En-Es task", "sec_num": "3.1.3" }, { "text": "The AL of the submitted result is -1.61, and the BLEU is 11.82.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results in En-Es task", "sec_num": "3.1.3" }, { "text": "DeepSpeech2 is an end-to-end automatic speech recognition system based on the PaddlePaddle deep learning framework (Amodei et al., 2015). To translate speech into the corresponding target-language text, I first segment the audio, use DeepSpeech2 to convert each voice segment into text, and then translate the recognized text into the target language with the STACL model. Figure 1 shows the framework of the audio translation system.", "cite_spans": [ { "start": 115, "end": 136, "text": "(Amodei et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 373, "end": 381, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "DeepSpeech2 model", "sec_num": "3.2.1" }, { "text": "Since each segment contains multiple Chinese characters, decoding only one character at a time leads to excessive delay (a high CW value). 
To overcome this issue, I decoded two characters at once. The CW of the submitted results is 19.21, and the BLEU is 7.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.2.2" }, { "text": "This paper describes the system I submitted to the Third Automatic Simultaneous Translation Workshop. The submitted system has low delay. In the future, I will conduct a further study of the speech recognition strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Micha\u0142 Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. 
The United Nations Parallel Corpus v1.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Deep speech 2: End-to-end speech recognition in English and Mandarin", "authors": [ { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Rishita", "middle": [], "last": "Anubhai", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Jingdong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Chrzanowski", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Erich", "middle": [], "last": "Elsen", "suffix": "" }, { "first": "Jesse", "middle": [ "H" ], "last": "Engel", "suffix": "" }, { "first": "Linxi", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Fougner", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Han", "suffix": "" }, { "first": "Awni", "middle": [ "Y" ], "last": "Hannun", "suffix": "" }, { "first": "Billy", "middle": [], "last": "Jun", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Legresley", "suffix": "" }, { "first": "Libby", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Prenger", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Raiman", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", 
"suffix": "" }, { "first": "David", "middle": [], "last": "Seetapun", "suffix": "" }, { "first": "Shubho", "middle": [], "last": "Sengupta", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhiqian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse H. Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. 2015. Deep speech 2: End-to-end speech recognition in English and Mandarin. CoRR, abs/1512.02595.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline", "authors": [ { "first": "Hui", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Jiayu", "middle": [], "last": "Du", "suffix": "" }, { "first": "Xingyu", "middle": [], "last": "Na", "suffix": "" }, { "first": "Bengu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. 2017. AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline. 
CoRR, abs/1709.05522.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "PaddleNLP: An easy-to-use and high performance NLP library", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "PaddleNLP Contributors. 2021. PaddleNLP: An easy-to-use and high performance NLP library. https://github.com/PaddlePaddle/PaddleNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework", "authors": [ { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Renjie", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Kaibo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Baigong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hairong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3025--3036", "other_ids": { "DOI": [ "10.18653/v1/P19-1289" ] }, "num": null, "urls": [], "raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Subword neural machine translation", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich. 2021. Subword neural machine translation. https://github.com/rsennrich/subword-nmt.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", "links": null } }, "ref_entries": {} } }