{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:42.033723Z" }, "title": "USST's System for AutoSimTrans 2022", "authors": [ { "first": "Jiahui", "middle": [], "last": "Zhu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai University of Science and Technology", "location": {} }, "email": "zhujiahui.cn@outlook.com" }, { "first": "Jun", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "East China University of Science and Technology Shanghai", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our submitted text-to-text Simultaneous translation (ST) system, which won the second place in the Chinese\u2192English streaming translation task of AutoSimTrans 2022. Our baseline system is a BPE-based Transformer model trained with the PaddlePaddle framework. In our experiments, we employ data synthesis and ensemble approaches to enhance the base model. In order to bridge the gap between general domain and spoken domain, we select in-domain data from a general corpus and mix them with a spoken corpus for mixed fine-tuning. Finally, we adopt a fixed wait-k policy to transfer our full-sentence translation model to simultaneous translation model. Experiments on the development data show that our system outperforms the baseline system.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our submitted text-to-text Simultaneous translation (ST) system, which won the second place in the Chinese\u2192English streaming translation task of AutoSimTrans 2022. Our baseline system is a BPE-based Transformer model trained with the PaddlePaddle framework. In our experiments, we employ data synthesis and ensemble approaches to enhance the base model. In order to bridge the gap between general domain and spoken domain, we select in-domain data from a general corpus and mix them with a spoken corpus for mixed fine-tuning. Finally, we adopt a fixed wait-k policy to transfer our full-sentence translation model to simultaneous translation model. Experiments on the development data show that our system outperforms the baseline system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Simultaneous translation (Gu et al., 2017; Ma et al., 2018) consists in generating a translation before the source speaker finishes speaking. It is widely used in many real-time scenarios such as international conferences, business negotiations and legal proceedings. The challenge of Simultaneous machine translation is to find a read-write policy that balances translation quality and latency. The translation quality will decline if the machine translation system reads insufficient source information. When reading wider source text, latency will increase.", "cite_spans": [ { "start": 25, "end": 42, "text": "(Gu et al., 2017;", "ref_id": "BIBREF6" }, { "start": 43, "end": 59, "text": "Ma et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent read-write policies can be divided into two categories: fixed policies such as wait-k (Ma et al., 2018) , wait-if* (Cho and Esipova, 2016) , and adaptive policies such as MoChA (Chiu and Raffel, 2017) , MILk (Arivazhagan et al., 2019) and MU (Zhang et al., 2020) . 
Fixed policies are simple to implement, but they neglect contextual information, which might reduce translation quality. Adaptive policies are more flexible: they can learn from data to achieve better quality/latency trade-offs, but they are accordingly more difficult to train.", "cite_spans": [ { "start": 93, "end": 110, "text": "(Ma et al., 2018)", "ref_id": "BIBREF10" }, { "start": 122, "end": 145, "text": "(Cho and Esipova, 2016)", "ref_id": "BIBREF3" }, { "start": 184, "end": 207, "text": "(Chiu and Raffel, 2017)", "ref_id": null }, { "start": 215, "end": 241, "text": "(Arivazhagan et al., 2019)", "ref_id": "BIBREF0" }, { "start": 249, "end": 269, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our system, we train a Transformer (Vaswani et al., 2017) with a deep encoder (Meng et al., 2020) as the baseline to obtain rich source representations; in addition, we initialize the model with the method proposed in DeepNet (Wang et al., 2022) in order to stabilize the training of the deeper model. At the pre-training stage, we first pretrain our model on a large general-domain corpus and then utilize data synthesis methods such as self-training and back-translation to improve model quality.", "cite_spans": [ { "start": 38, "end": 60, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF19" }, { "start": 81, "end": 100, "text": "(Meng et al., 2020)", "ref_id": "BIBREF11" }, { "start": 225, "end": 244, "text": "(Wang et al., 2022)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "During the fine-tuning phase, we first apply fine-tuning on a small spoken corpus. Second, for better domain adaptation, we adopt mixed fine-tuning (Chu et al., 2017), which trains on a mixed dataset that includes a subsampled general corpus and an upsampled spoken corpus. Third, we propose a method called \"in-domain mixed fine-tuning\", which further improves the BLEU score over mixed fine-tuning. Specifically, inspired by in-domain data filtering (Moore and Lewis, 2010; Ng et al., 2019), we mix the upsampled spoken data with in-domain data selected from the general corpus rather than with randomly subsampled data.", "cite_spans": [ { "start": 139, "end": 157, "text": "(Chu et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the final stage, we employ the wait-k policy to convert the full-sentence translation model into a prefix-to-prefix architecture that predicts target words using only prefixes of the source sentence. After waiting for the first k-1 source subwords, the system alternately reads a source subword and predicts a target subword until the end-of-sentence token is detected. An example of wait-1 is shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 379, "end": 387, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of this paper are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a domain adaptation approach called \"in-domain mixed fine-tuning\", which is empirically shown to outperform fine-tuning while mitigating overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 All our code has been open-sourced; see USST 1.
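The fixed wait-k schedule described above can be made concrete with a short sketch. The following is a minimal Python illustration of the prefix-to-prefix read/write loop; the streaming source iterator, the predict_next callback and the END marker are hypothetical stand-ins for illustration, not the interface of the PaddlePaddle implementation used in the submitted system.

```python
from typing import Callable, Iterator, List

END = "</s>"  # hypothetical end-of-sentence marker

def wait_k_decode(source: Iterator[str],
                  predict_next: Callable[[List[str], List[str]], str],
                  k: int) -> List[str]:
    """Fixed wait-k policy: read the first k source subwords, then alternate
    one WRITE (predict a target subword from the current source prefix) with
    one READ, until the source is exhausted and the model emits END."""
    src: List[str] = []
    tgt: List[str] = []
    exhausted = False

    def read_one() -> None:
        nonlocal exhausted
        try:
            src.append(next(source))
        except StopIteration:
            exhausted = True

    for _ in range(k):              # initial waiting phase
        read_one()

    while True:
        token = predict_next(src, tgt)   # WRITE from the current prefix
        if token == END:
            break
        tgt.append(token)
        if not exhausted:
            read_one()                   # READ one more source subword
    return tgt

# Toy usage: a dummy "model" that copies source subwords one by one.
if __name__ == "__main__":
    def dummy_predict(src_prefix: List[str], tgt_prefix: List[str]) -> str:
        if len(tgt_prefix) < len(src_prefix):
            return src_prefix[len(tgt_prefix)]
        return END

    print(wait_k_decode(iter(["我", "爱", "你"]), dummy_predict, k=1))
```

With k=1 the loop reproduces the wait-1 behaviour mentioned above: one subword is read before the first target subword is emitted, and reading and writing then alternate.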
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We participate in the Chinese-English streaming transcription track , where each sentence is broken into lines whose length is incremented by one word until the sentence is completed. An example is shown in Table 1 . For pre-training, we use the CWMT21 parallel corpus (9.1M) 2 , and we fine-tune the pretrained model using transcription and translation of the BSTC (Baidu Speech Translation Corpus,37K) (Zhang et al., 2021) , shown in Table 2 . We also use CWMT's 10M Chinese monolingual data for synthetic data generation.", "cite_spans": [ { "start": 404, "end": 424, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 436, "end": 443, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Similar to (Ng et al., 2019; Meng et al., 2020) , we preprocess the data as follows:", "cite_spans": [ { "start": 11, "end": 28, "text": "(Ng et al., 2019;", "ref_id": "BIBREF13" }, { "start": 29, "end": 47, "text": "Meng et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "\u2022 Word Segmentation: For Chinese, we use the open-source Chinese word segmentation tool jieba 3 for word segmentation. For English, we adopt punctuation-normalization, tokenization and truecasing with Moses scripts 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "\u2022 Length filter: We remove sentences that are longer than 250 words and sentence pairs with a source/target length ratio exceeding 2.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "\u2022 Langage identification (langid) (Lui and Baldwin, 2012) : We use fastText 5 for language identification filtering, which removes sentence pairs that are not predicted as the correct language on either side.", "cite_spans": [ { "start": 34, "end": 57, "text": "(Lui and Baldwin, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "\u2022 Deduplication: Remove duplicate sentences in Chinese monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "\u2022 Byte-pair-encoding (BPE) (Sennrich et al., 2016) 6 : For both the Chinese and English sides, we use BPE with 32K operations.", "cite_spans": [ { "start": 27, "end": 52, "text": "(Sennrich et al., 2016) 6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "See Table 3 for details on the filtered data size.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Domain Train size Dev size CWMT21 General 9,023,708 1011 BSTC Spoken 37,901 956 3 System Overview", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": null }, { "text": "As shown in previous work Sun et al., 2019; Meng et al., 2020) , increasing the depth of the Transformer encoder can substantially improve model performance, therefore we train the Transformer with deep encoder to obtain a better source representation. 
In addition, in order to combine the high performance of post-norm with the stable training of pre-norm (Nguyen and Salazar, 2019), we use the methods proposed in DeepNet (Wang et al., 2022), including a normalization function, deepnorm, that modifies the residual connection, and a theoretically derived initialization. Our model configurations are shown in Table 4. For training the full-sentence translation model, given the source sentence x, the probability of predicting the target sentence y is shown in Eq. 1, and the training objective is to minimize the negative log-likelihood shown in Eq. 2.", "cite_spans": [ { "start": 26, "end": 43, "text": "Sun et al., 2019;", "ref_id": "BIBREF18" }, { "start": 44, "end": 62, "text": "Meng et al., 2020)", "ref_id": "BIBREF11" }, { "start": 426, "end": 445, "text": "(Wang et al., 2022)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 612, "end": 619, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Baseline System", "sec_num": "3.1" }, { "text": "p(y|x) = \\prod_{t=1}^{|y|} p(y_t \\mid x, y_{<t}) \\quad (1), \\qquad L(\\theta) = -\\sum_{t=1}^{|y|} \\log p(y_t \\mid x, y_{<t}) \\quad (2)", "num": null }, "TABREF2": { "html": null, "text": "Statistics of Chinese\u2192English parallel corpus.", "type_str": "table", "content": "
Filter step | Zh-En | Zh Mono
no filter | 9.1M | 10M
+length filter | 8.9M | 10M
+langid filter | 8.8M | 10M
+deduplication | - | 6.8M
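As an aside, the length/ratio filtering and deduplication steps counted in this table can be sketched in a few lines of Python. This is an illustrative reimplementation under the thresholds stated in Section 2 (250 words, ratio 2.5); it is not the exact script used to produce these statistics, and the language-identification step (fastText langid) is applied separately and omitted here.

```python
from typing import Iterable, List, Tuple

def length_and_ratio_filter(pairs: Iterable[Tuple[str, str]],
                            max_len: int = 250,
                            max_ratio: float = 2.5) -> List[Tuple[str, str]]:
    """Keep sentence pairs whose sides are at most max_len words and whose
    source/target length ratio (in either direction) does not exceed max_ratio."""
    kept = []
    for src, tgt in pairs:
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if n_src == 0 or n_tgt == 0:
            continue
        if n_src > max_len or n_tgt > max_len:
            continue
        if max(n_src / n_tgt, n_tgt / n_src) > max_ratio:
            continue
        kept.append((src, tgt))
    return kept

def deduplicate(sentences: Iterable[str]) -> List[str]:
    """Remove exact duplicate sentences (applied to the Chinese monolingual data)."""
    seen, unique = set(), []
    for s in sentences:
        if s not in seen:
            seen.add(s)
            unique.append(s)
    return unique
```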
", "num": null }, "TABREF3": { "html": null, "text": "", "type_str": "table", "content": "", "num": null }, "TABREF5": { "html": null, "text": "", "type_str": "table", "content": "
", "num": null }, "TABREF7": { "html": null, "text": "BLEU and Average Lagging on BSTC dev set.", "type_str": "table", "content": "
", "num": null }, "TABREF10": { "html": null, "text": "", "type_str": "table", "content": "
", "num": null } } } }