{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:34.302262Z" }, "title": "End-to-End Simultaneous Speech Translation with Pretraining and Distillation: Huawei Noah's System for AutoSimTranS 2022", "authors": [ { "first": "Xingshan", "middle": [], "last": "Zeng", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "zeng.xingshan@huawei.com" }, { "first": "Pengfei", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "lipengfei111@huawei.com" }, { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "liliangyou@huawei.com" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "qun.liu@huawei.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the system submitted to AutoSimTrans 2022 from Huawei Noah's Ark Lab, which won the first place in the audio input track of the Chinese-English translation task. Our system is based on RealTranS, an end-to-end simultaneous speech translation model. We enhance the model with pretraining, by initializing the acoustic encoder with ASR encoder, and the semantic encoder and decoder with NMT encoder and decoder, respectively. To relieve the data scarcity, we further construct pseudo training corpus as a kind of knowledge distillation with ASR data and the pretrained NMT model. Meanwhile, we also apply several techniques to improve the robustness and domain generalizability, including punctuation removal, tokenlevel knowledge distillation and multi-domain finetuning. Experiments show that our system significantly outperforms the baselines at all latency and also verify the effectiveness of our proposed methods.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the system submitted to AutoSimTrans 2022 from Huawei Noah's Ark Lab, which won the first place in the audio input track of the Chinese-English translation task. Our system is based on RealTranS, an end-to-end simultaneous speech translation model. We enhance the model with pretraining, by initializing the acoustic encoder with ASR encoder, and the semantic encoder and decoder with NMT encoder and decoder, respectively. To relieve the data scarcity, we further construct pseudo training corpus as a kind of knowledge distillation with ASR data and the pretrained NMT model. Meanwhile, we also apply several techniques to improve the robustness and domain generalizability, including punctuation removal, tokenlevel knowledge distillation and multi-domain finetuning. Experiments show that our system significantly outperforms the baselines at all latency and also verify the effectiveness of our proposed methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Simultaneous Speech Translation (ST) task (F\u00fcgen et al., 2007; Oda et al., 2014) aims to translate speech into the corresponding text in another language while reading the source speech. 
Prior works mainly focus on the cascaded solution, i.e., first recognizing the speech with a streaming ASR model and then translating it into the target language with a simultaneous NMT (Ma et al., 2019) model. Such cascaded systems can leverage off-the-shelf ASR and NMT systems, for which large-scale training data are available.", "cite_spans": [ { "start": 42, "end": 62, "text": "(F\u00fcgen et al., 2007;", "ref_id": "BIBREF2" }, { "start": 63, "end": 80, "text": "Oda et al., 2014)", "ref_id": "BIBREF16" }, { "start": 364, "end": 381, "text": "(Ma et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, end-to-end simultaneous ST models have also been proposed (Ren et al., 2020; Zeng et al., 2021) and have shown promising improvements over cascaded models when trained on the same amount of data, especially under low-latency requirements. End-to-end models are believed to have the advantages of lower latency, smaller model size and less error propagation (Weiss et al., 2017), but they suffer from data scarcity. A well-trained end-to-end model typically needs a large amount of training data. To alleviate the data scarcity problem, pretraining (Xu et al., 2021) and data augmentation (Bahar et al., 2019; Jia et al., 2019) are the two main techniques. We examine the effectiveness of these two techniques for improving end-to-end models in this work.", "cite_spans": [ { "start": 62, "end": 80, "text": "(Ren et al., 2020;", "ref_id": "BIBREF19" }, { "start": 81, "end": 99, "text": "Zeng et al., 2021)", "ref_id": "BIBREF22" }, { "start": 365, "end": 385, "text": "(Weiss et al., 2017)", "ref_id": "BIBREF20" }, { "start": 551, "end": 568, "text": "(Xu et al., 2021;", "ref_id": "BIBREF21" }, { "start": 591, "end": 611, "text": "(Bahar et al., 2019;", "ref_id": "BIBREF0" }, { "start": 612, "end": 629, "text": "Jia et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, our end-to-end ST model follows RealTranS (Zeng et al., 2021), an encoder-decoder model whose encoder is decoupled into an acoustic encoder and a semantic encoder. The acoustic encoder extracts acoustic features and thus has a similar function to an ASR encoder. Therefore, we initialize it with a pretrained ASR encoder. The semantic encoder is required to learn semantic knowledge, which benefits the translation task, so we initialize it with a pretrained NMT encoder. The decoder is also initialized with a pretrained NMT decoder to generate the target text. For data augmentation, we construct a pseudo ST corpus based on ASR data and the pretrained NMT model. The ground-truth transcriptions are translated into target-language texts, so that speech-transcription-translation triplets for ST training are built. This is also known as sequence-level knowledge distillation (Kim and Rush, 2016). Generally, the NMT data can also be augmented with a TTS model to generate pseudo speech. However, the data quality highly depends on TTS performance, and it is hard for TTS to produce voices similar to those in real scenarios. Thus we do not utilize this method and leave it to future work. Another popular technique for audio data augmentation is SpecAugment (Park et al., 2019; Bahar et al., 2019), which randomly masks a block of consecutive time steps and/or mel frequency channels of the input speech features during training.
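As a concrete illustration of the masking just described, below is a minimal SpecAugment-style sketch in PyTorch; the tensor shape, mask widths and number of masks are illustrative assumptions, not the settings used in this system.

```python
import torch

def spec_augment(features: torch.Tensor, max_time_mask: int = 40,
                 max_freq_mask: int = 27, num_masks: int = 2) -> torch.Tensor:
    """Randomly zero out blocks of consecutive time steps and mel channels.

    features: (T, F) log-mel features for one utterance.
    Mask widths and counts are illustrative, not this system's actual config.
    """
    x = features.clone()
    T, F = x.shape
    for _ in range(num_masks):
        # mask a block of consecutive time steps
        t = torch.randint(0, max_time_mask + 1, (1,)).item()
        t0 = torch.randint(0, max(1, T - t), (1,)).item()
        x[t0:t0 + t, :] = 0.0
        # mask a block of consecutive mel frequency channels
        f = torch.randint(0, max_freq_mask + 1, (1,)).item()
        f0 = torch.randint(0, max(1, F - f), (1,)).item()
        x[:, f0:f0 + f] = 0.0
    return x

# usage: augmented = spec_augment(torch.randn(500, 80))
```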
It is a simple method with low implementation cost and has been shown to be effective in avoiding overfitting and improving robustness. We apply it to all audio-related model training. The training procedure for our ST model mainly contains three steps: ASR and NMT pretraining, large-scale training on the constructed pseudo data, and finetuning on the in-domain data. During training, we remove all punctuation from the source text in audio-related training (i.e., excluding NMT pretraining) to relieve the learning burden and improve recognition quality. To enhance the final performance after finetuning, we also utilize token-level knowledge distillation from the full-sentence NMT model and a multi-domain finetuning trick.", "cite_spans": [ { "start": 56, "end": 75, "text": "(Zeng et al., 2021)", "ref_id": "BIBREF22" }, { "start": 895, "end": 915, "text": "(Kim and Rush, 2016)", "ref_id": "BIBREF7" }, { "start": 1286, "end": 1305, "text": "(Park et al., 2019;", "ref_id": "BIBREF17" }, { "start": 1306, "end": 1325, "text": "Bahar et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model is used to participate in the audio input track of the AutoSimTrans 2022 Chinese-English translation task. In this track, an in-domain ST dataset called BSTC (containing about 70 hours of audio) is provided, which is very limited. Therefore, assisted by extra ASR and NMT data, we use the aforementioned techniques to achieve remarkable improvements at all latency levels, which results in winning first place in the track. We also conduct further experiments to examine the effectiveness of the techniques we used. The experiments show that all of them contribute to the improvement of the final model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We build our model based on RealTranS (Zeng et al., 2021), an end-to-end simultaneous speech translation model with its encoder decoupled into an acoustic encoder and a semantic encoder (see Figure 1). With a CTC module guiding the acoustic encoder to produce acoustic-level features, the decoupling relieves the burden of the ST encoder and makes the two separate modules focus on different knowledge, which benefits model training.", "cite_spans": [ { "start": 38, "end": 57, "text": "(Zeng et al., 2021)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 187, "end": 196, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Description", "sec_num": "2" }, { "text": "RealTranS leverages the unidirectional Conv-Transformer (Huang et al., 2020) as the acoustic encoder for gradual downsampling, and weighted-shrinking for bridging the modality gap between speech and text. With weighted-shrinking, long speech features are shrunk to lengths similar to those of their corresponding transcriptions, which makes the input of the semantic encoder more similar to the input of the NMT encoder. In this way, knowledge transfer becomes easier when we initialize the semantic encoder with the NMT encoder.
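Such initialization amounts to simple checkpoint surgery. The sketch below shows one way it could be done; the checkpoint layout (a "model" key) and the parameter-name prefixes (`encoder.`, `semantic_encoder.`) are hypothetical and must be adapted to the actual implementation.

```python
import torch

def init_from_pretrained(st_model: torch.nn.Module, nmt_ckpt_path: str,
                         src_prefix: str = "encoder.",
                         tgt_prefix: str = "semantic_encoder.") -> None:
    """Copy pretrained NMT encoder weights into the ST semantic encoder.

    Prefixes and the checkpoint's "model" key are assumptions; parameters
    without a matching counterpart keep their random initialization.
    """
    nmt_state = torch.load(nmt_ckpt_path, map_location="cpu")["model"]
    st_state = st_model.state_dict()
    copied = 0
    for name, tensor in nmt_state.items():
        if not name.startswith(src_prefix):
            continue
        target_name = tgt_prefix + name[len(src_prefix):]
        if target_name in st_state and st_state[target_name].shape == tensor.shape:
            st_state[target_name] = tensor
            copied += 1
    st_model.load_state_dict(st_state)
    print(f"initialized {copied} semantic-encoder tensors from the NMT checkpoint")
```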
Apart from the semantic encoder, we also initialize the acoustic encoder with a pretrained ASR encoder, and the decoder with a pretrained NMT decoder, which has been shown to be very useful in boosting performance (Xu et al., 2021).", "cite_spans": [ { "start": 56, "end": 76, "text": "(Huang et al., 2020)", "ref_id": "BIBREF5" }, { "start": 742, "end": 759, "text": "(Xu et al., 2021)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "2" }, { "text": "For the simultaneous policy, we use the wait-k-stride-n policy (Zeng et al., 2021), which has shown promising improvement over the conventional wait-k policy (Ma et al., 2019).", "cite_spans": [ { "start": 58, "end": 77, "text": "(Zeng et al., 2021)", "ref_id": "BIBREF22" }, { "start": 154, "end": 171, "text": "(Ma et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "2" }, { "text": "Our model training consists of three steps: ASR and NMT pretraining, large-scale training on the constructed pseudo data, and finetuning on the in-domain data. Each step may involve different techniques to enhance model performance, and we describe them in detail as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3" }, { "text": "We first describe how we pretrain our ASR and NMT models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "ASR Pretraining. Our ASR model follows the architecture of the Conv-Transformer Transducer proposed by Huang et al. (2020). A Transducer model contains an audio encoder, a prediction net and a joint net; the audio encoder is used for initializing the acoustic encoder of our ST model and the rest are discarded. For each frame in the input speech features, the model first predicts either a token label from the vocabulary or a special blank symbol. When a label is predicted, the model continues to predict the next output; when the model predicts a blank symbol, it proceeds to the next frame, indicating that no more labels can be predicted from the current frames. Therefore, for each input speech $x$, the model gives $T_x + T_z$ predictions, where $T_x$ (the length of $x$) is the number of blank symbols and $T_z$ is the number of token labels representing the output transcription $z$. A Transducer model computes the following marginalized distribution and maximizes it during training:", "cite_spans": [ { "start": 99, "end": 118, "text": "Huang et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "p(z|x) = \sum_{\hat{z} \in A(x,z)} \prod_{i=1}^{T_x+T_z} p(\hat{z}_i \mid x_1, \ldots, x_{t_i}, z_0, \ldots, z_{u_i-1}) \quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "where A(x, z) is the set containing all valid alignment paths such that removing the blank symbols in $\hat{z}$ yields $z$. The summation of the probabilities of all alignment paths is computed efficiently with the forward-backward algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "As there is no ASR data provided, we collect large-scale ASR datasets from both publicly available websites and our internal system (the statistics of the datasets are shown in Table 1) for training.
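For reference, the marginalized Transducer objective in Eq. (1) is usually computed with an off-the-shelf loss during this ASR pretraining; the sketch below assumes a recent torchaudio and purely illustrative tensor sizes, and is not necessarily how our training code computes it.

```python
import torch
import torchaudio

# Illustrative sizes only: B utterances, T encoder frames, U target tokens, V vocab (+ blank).
B, T, U, V = 2, 50, 10, 1000
joint_logits = torch.randn(B, T, U + 1, V + 1)             # output of the joint net
targets = torch.randint(0, V, (B, U), dtype=torch.int32)   # token labels (no blanks)
frame_lens = torch.full((B,), T, dtype=torch.int32)
target_lens = torch.full((B,), U, dtype=torch.int32)

# Marginalizes over all alignment paths A(x, z) with the forward-backward
# algorithm, i.e. the negative log of Eq. (1).
loss = torchaudio.functional.rnnt_loss(
    joint_logits, targets, frame_lens, target_lens, blank=V, reduction="mean"
)
```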
During training, we also add Gaussian noise and apply speed perturbation (Ko et al., 2015) and SpecAugment (Park et al., 2019) for data augmentation and model robustness.", "cite_spans": [ { "start": 277, "end": 294, "text": "(Ko et al., 2015)", "ref_id": "BIBREF9" }, { "start": 311, "end": 330, "text": "(Park et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 171, "end": 178, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "Finally, our pretrained ASR model achieves 11.35% WER (Word Error Rate) on the BSTC development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "NMT Pretraining. We pretrain our NMT model with CeMAT (Li et al., 2022), a sequence-to-sequence pretraining model with a bidirectional decoder, which has been shown to be effective in NMT tasks. CeMAT can be pretrained on large-scale bilingual and monolingual corpora. As no additional text data are available, we only use the dynamic dual-masking algorithm to improve performance. Given an input source sentence z, we first sample a masking ratio \u00b5 from a uniform distribution over [0.1, 0.2], then randomly mask a subset of source words according to \u00b5. For the corresponding target sentence y, we also use a uniform distribution over [0.2, 0.5] to sample a masking ratio \u03c5. Following CeMAT, we set \u03c5 \u2265 \u00b5 to force the bidirectional decoder to obtain more information from the encoder. For monolingual data, we create pseudo bilingual text by copying the sentence, then sample \u03c5 = \u00b5 from a uniform distribution over [0.3, 0.4] and mask the same subset on both sides. After dual-masking, we get the new sentence pair (\u1e91,\u0177), which is used for jointly training the encoder and decoder by predicting the masked tokens on both sides. The final training objective is formulated as follows:", "cite_spans": [ { "start": 54, "end": 71, "text": "(Li et al., 2022)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "L = -\sum_{(\hat{z},\hat{y})} \big[ \lambda \sum_{y_j \in y_{mask}} \log P(y_j \mid \hat{z}, \hat{y}) + (1-\lambda) \sum_{z_i \in z_{mask}} \log P(z_i \mid \hat{z}) \big] \quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "where $y_{mask}$ is the set of masked target words, $z_{mask}$ is the set of masked source words, and \u03bb is a hyper-parameter that balances the influence of the two sides. Following CeMAT, we set \u03bb = 0.7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "Our NMT pretraining procedure can be summarized in three sub-steps. We first train a basic NMT model using the provided general-domain bilingual data (see Table 1), and generate pseudo target sentences based on the source text from the ASR data used in ASR pretraining. To improve the quality of the pseudo corpus, we use HintedBT (Ramnath et al., 2021) to score each generated sentence. Next, we combine the bilingual data, the pseudo corpus and the monolingual text (from the used ASR data) to pretrain CeMAT. Finally, we finetune it on the bilingual and pseudo corpora, including the in-domain data (i.e., the text part of the BSTC dataset), to produce our final NMT model. The encoder and decoder of the NMT model are used to initialize the semantic encoder and decoder of our ST model, respectively.
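Before moving on, the dynamic dual-masking described above can be summarized with a short sketch; masking whole tokens with a single `<mask>` symbol and enforcing \u03c5 \u2265 \u00b5 by clipping are simplifying assumptions, and CeMAT's actual implementation may differ.

```python
import random

MASK = "<mask>"

def dual_mask(src_tokens, tgt_tokens):
    """Dynamic dual-masking for a bilingual pair, as described above.

    mu ~ U[0.1, 0.2] for the source, upsilon ~ U[0.2, 0.5] for the target,
    with upsilon >= mu enforced here by clipping (an assumption).
    """
    mu = random.uniform(0.1, 0.2)
    upsilon = max(mu, random.uniform(0.2, 0.5))

    def mask(tokens, ratio):
        n = max(1, int(round(ratio * len(tokens))))
        positions = set(random.sample(range(len(tokens)), n))
        return [MASK if i in positions else tok for i, tok in enumerate(tokens)]

    return mask(src_tokens, mu), mask(tgt_tokens, upsilon)

def dual_mask_mono(tokens):
    """Monolingual case: copy the sentence to both sides, sample one ratio
    in [0.3, 0.4], and mask the same positions on both sides."""
    ratio = random.uniform(0.3, 0.4)
    n = max(1, int(round(ratio * len(tokens))))
    positions = set(random.sample(range(len(tokens)), n))
    masked = [MASK if i in positions else tok for i, tok in enumerate(tokens)]
    return masked, list(masked)
```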
It is also used to generate the pseudo ST data described in the next subsection.", "cite_spans": [ { "start": 332, "end": 354, "text": "(Ramnath et al., 2021)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "Our NMT model achieves a BLEU score of 21.82 on the BSTC development set, and also won second place in the streaming transcription input track of the Chinese-English translation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining", "sec_num": "3.1" }, { "text": "As the provided ST data is limited (about 70 hours of annotated data), it is difficult to train an end-to-end model directly on the provided data alone. We decide to construct pseudo data from the ASR data we used: we translate the Chinese transcriptions into English with our pretrained NMT model, so that we obtain a large-scale pseudo ST corpus of audio-transcription-translation triplets. In this way, we can leverage large-scale unannotated audio and distill knowledge from the NMT model. We remove all punctuation from the transcriptions (as the ASR data come from different domains, some contain punctuation and some do not) to make them consistent during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training on Pseudo Data (Distillation)", "sec_num": "3.2" }, { "text": "We first train our model (initialized with the pretrained modules described in Section 3.1) on the pseudo data with multi-path wait-k training (Elbayad et al., 2020) to cover all possible k values. Specifically, the k value is uniformly sampled from K = [1, ..., |K|] for each training sample, while we keep the n value in the wait-k-stride-n policy at 2. In this way, the model can learn knowledge for different latency requirements. The training objectives follow RealTranS and contain the CTC loss ($L_{CTC}$) (Graves et al., 2006) with a blank penalty ($L_{BP}$) (Zeng et al., 2021). We omit their equations here and refer the readers to Zeng et al. (2021) for details. The translation loss is defined as follows:", "cite_spans": [ { "start": 523, "end": 543, "text": "(Graves et al., 2006", "ref_id": "BIBREF3" }, { "start": 575, "end": 594, "text": "(Zeng et al., 2021)", "ref_id": "BIBREF22" }, { "start": 651, "end": 669, "text": "Zeng et al. (2021)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Training on Pseudo Data (Distillation)", "sec_num": "3.2" }, { "text": "L_{ST} = -\sum_{(x,y)\in D,\, k\sim U(K)} \sum_{t=1}^{T_y} \log p(y_t \mid y_{<t}, x_{\le g(t,k)}) \quad (3)
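To make this objective concrete, the sketch below shows the per-sample k sampling of multi-path training and one common formulation of the wait-k-stride-n read schedule, where g(t, k) is taken to be the number of shrunk source states visible when predicting the t-th target token; the exact formulation in RealTranS may differ in details.

```python
import random

def sample_k(k_max: int) -> int:
    """Multi-path wait-k training: sample k uniformly from {1, ..., |K|} per training sample."""
    return random.randint(1, k_max)

def g(t: int, k: int, n: int = 2) -> int:
    """Wait-k-stride-n read schedule (a common formulation, assumed here):
    first read k source states, then alternately write n target tokens and
    read n more source states. t is 1-indexed; n=1 recovers plain wait-k."""
    return k + n * ((t - 1) // n)

# Example: with k = 3 and n = 2, target tokens 1-2 are produced after reading
# 3 source states, tokens 3-4 after reading 5, tokens 5-6 after reading 7, ...
if __name__ == "__main__":
    k = sample_k(9)
    print(k, [g(t, k) for t in range(1, 7)])
```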