{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:42.656379Z"
},
"title": "BIT's system for the AutoSimTrans 2020",
"authors": [
{
"first": "Minqin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "lmqminqinli@163.com"
},
{
"first": "Haodong",
"middle": [],
"last": "Cheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yuanjie",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Sijia",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Liting",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yuhang",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Institute of Technology",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "guoyuhang@bit.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our machine translation systems for the streaming Chinese-to-English translation task of AutoSimTrans 2020. We present a sentence length based method and a sentence boundary detection model based method for the streaming input segmentation. Experimental results of the transcription and the ASR output translation on the development data sets show that the translation system with the detection model based method outperforms the one with the length based method in BLEU score by 1.19 and 0.99 respectively under similar or better latency.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our machine translation systems for the streaming Chinese-to-English translation task of AutoSimTrans 2020. We present a sentence length based method and a sentence boundary detection model based method for the streaming input segmentation. Experimental results of the transcription and the ASR output translation on the development data sets show that the translation system with the detection model based method outperforms the one with the length based method in BLEU score by 1.19 and 0.99 respectively under similar or better latency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic simultaneous machine translation is a useful technique in many speech translation scenarios. Compared with traditional machine translations, simultaneous translation focuses on processing streaming inputs of spoken language and achieving low latency translations. Two challenges have to be faced in this task. On one hand, few parallel corpora in spoken language domain are open available, which leads to the fact that the translation performance is not as good as in general domain. On the other hand, traditional machine translation takes a full sentence as input so that the latency of the translation is relatively long.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To deal with the shortage of the spoken language corpora, we pre-train a machine translation model on general domain corpus and then fine-tune this model with limited spoken language corpora. We also augment the spoken language corpora with different strategies to increase the in-domain corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to reduce the translation latency, we use three sentence segmentation methods: a punctuation based method, a length based method and a sentence boundary detection model based method. All of the methods can split the input source sentence into short pieces, which makes the translation model obtain low latency translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the streaming automatic speech recognition(ASR) output track for the Chineseto-English translation task of AutoSimTrans 2020, most of our proposed systems outperform the baseline systems in BLEU score and the sentence boundary detection model based sentence segmentation method abstains higher BLEU score than the length based method under similar latency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in the streaming Chineseto-English translation task of AutoSimTrans 2020 1 : the streaming ASR output translation track and the streaming transcription translation track. The two tracks are similar except that the ASR output may contain error results and includes no internal punctuation but end punctuation. Table 1 shows an example of the streaming ASR output translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Our all systems can be divided into 3 parts: data preprocessing, sentence segmentation and translation. Data preprocessing includes data cleaning, data augmentation. We implement 3 sentence segmentation methods, which are based on punctuation, sentence length and a sentence boundary detection model. The training of translation model includes pretraining out of domain and fine-tuning in domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches",
"sec_num": "3"
},
{
"text": "Hello everyone. Welcome everyone to come , here. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Streaming ASR output Translation",
"sec_num": null
},
{
"text": "Noises in large-scale parallel corpus are almost inevitable. We clean the parallel corpus for the training. Here we mainly focus on the missaligned errors in the training corpus. We find that in the CWMT19 zh-en data set, some of the target sentences are not in English, but in Chinese, Japanese, French or some other noisy form. We suspect these small noises may affect the training of the model. Inspired by B\u00e9rard et al. (2019) , we apply a language detection script, langid.py 2 , to the source and the target sentence of the CWMT19 data set separately. Sentence pairs which are not matched with their expected languages are deleted. The corpus are then cleaned by the tensor2tensor 3 module by default. Eventually the CWMT19 corpus are then filtered from 9,023,708 pairs into 7,227,510 pairs after data cleaning.",
"cite_spans": [
{
"start": 410,
"end": 430,
"text": "B\u00e9rard et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Cleaning",
"sec_num": "3.1"
},
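{
"text": "To make this filtering step concrete, the following minimal sketch (our own illustration of such a filter, not the exact script used in this work) keeps only sentence pairs whose detected languages match the expected source and target languages:\n\nimport langid\n\ndef filter_pairs(pairs, src_lang='zh', tgt_lang='en'):\n    # Keep a pair only if both sides match their expected languages.\n    # langid.classify returns a (language_code, score) tuple.\n    kept = []\n    for src, tgt in pairs:\n        if langid.classify(src)[0] == src_lang and langid.classify(tgt)[0] == tgt_lang:\n            kept.append((src, tgt))\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Cleaning",
"sec_num": "3.1"
},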
{
"text": "Insufficiency of training data is common in spoken language translation, and many data augmentation methods are used to alleviate this problem . In the streaming ASR output translation system, we use the homophone substitution method to augment the training data according to the characteristics of ASR output translation. The results of ASR usually contain errors of homophonic substitution. We randomly replace each character in the source language part of the training corpus with probability p with its homophones to improve the generalization ability of the system. As shown in Table 2 , we find characters that are homophonic with the selected characters, sample them according to the probability that these characters appear in the corpus, and substitute them to the corresponding positions. The data augmentation is only used in our MT model's training because of the insufficiency of training data in spoken language domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 583,
"end": 590,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.2"
},
{
"text": "Similarly, we randomly substitute words in the source language sentences with the homophone substitution. The result of this substitution is closer to the real speech recognition result. As shown in Table 3 . We first split the sentence in the source language into a word sequence, determine whether to replace each word with its homophones by probability p, and then sample them according to the distribution of homophones in a corpus. Finally we replace to the corresponding position.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.2"
},
{
"text": "In this system, we adopt the character and the word frequency distribution in an ASR corpus, the AISHELL-2 corpus (Du et al., 2018) , and set the substitution probability p = 0.3.",
"cite_spans": [
{
"start": 114,
"end": 131,
"text": "(Du et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.2"
},
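{
"text": "A minimal sketch of the character-level homophone substitution is given below. The homophone table and its construction (e.g., a pinyin lookup with counts from AISHELL-2) are assumptions for illustration, not the authors' exact resources:\n\nimport random\n\n# Hypothetical table: character -> list of (homophone, corpus_count) pairs.\nHOMOPHONES = {}\n\ndef augment_chars(chars, p=0.3):\n    out = []\n    for c in chars:\n        cands = HOMOPHONES.get(c)\n        if cands and random.random() < p:\n            # Sample a homophone according to its frequency in the ASR corpus.\n            homos = [h for h, n in cands]\n            weights = [n for h, n in cands]\n            out.append(random.choices(homos, weights=weights)[0])\n        else:\n            out.append(c)\n    return ''.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.2"
},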
{
"text": "Low latency is important to simultaneous machine translation. Our systems are closed to low latency translation by splitting long input word sequences into short ones. We use three sentence segmentation methods in this work, namely, punctuation based sentence segmentation (PSS), length based sentence segmentation (LSS), and sentence boundary detection model based sentence segmentation (MSS). PSS In the punctuation based sentence segmentation method we put the streaming input tokens into a buffer one by one. When the input token is a punctuation, the word sequence in the buffer is translated. Then the buffer is cleared and we put the next tokens into it. The above procedure repeats until the end of the streaming inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},
{
"text": "LSS In our length based sentence segmentation method we put the steaming input tokens into a buffer one by one. When the input token is a punctuation or the sequence length in the buffer reaches a threshold L, the word sequence in the buffer except the last word is translated in case of the last word is an in complete one. The translated part in the buffer is then cleared and then we put the next tokens Original Chinese (she) English This society society hasn't trust it doesn't work Substitution (she) English This suppose society hasn't newcomers it doesn't work into the buffer. The above procedure repeats until the end of the streaming inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},
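{
"text": "The following sketch illustrates LSS under our reading of the method (the punctuation set and the treatment of the final buffer are assumptions); PSS corresponds to the special case where the buffer is only flushed on punctuation:\n\nPUNCT = set(',.!?') | set('\uff0c\u3002\uff01\uff1f')  # ASCII and Chinese punctuation\n\ndef lss_segments(stream, L=15):\n    # Flush the buffer on punctuation, or when it reaches L tokens; in the\n    # length case, hold back the last token since it may be incomplete.\n    buf = []\n    for tok in stream:\n        buf.append(tok)\n        if tok in PUNCT:\n            yield buf\n            buf = []\n        elif len(buf) >= L:\n            yield buf[:-1]\n            buf = buf[-1:]\n    if buf:\n        yield buf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},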
{
"text": "Text Label 0 So we think that free 1 So we think that free is only temporary MSS Apparently many translation inputs with the LSS are incomplete sentences fragments because of the hard sentence segmentation. Here we propose a sentence boundary detection model for the sentence segmentation. We build this model on the top of a pretraining model, BERT (Devlin et al., 2018) . Our model is built by adding two layers of full connected network to the Chinese BERT pre-training model. The training data set is constructed using all transcription pairs provided by the organizer. For the sentences in transcriptions, we use a punctuation set, {, . ! ? }, as the sentence boundary indicators to obtain complete sentences, which are used as positive samples. And then we sample incomplete fragments from the above sentences uniformly to obtain negative samples. The ratio of the positive sample to the negative sample is 1 : 4. Table 4 illustrates a positive example and a negative example. The training set is of 370k examples, the test set is of 7k examples, and the validation set is of 7k examples. After running 3 epochs, the model converges with an accuracy of 92.5% in the test set.",
"cite_spans": [
{
"start": 350,
"end": 371,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 920,
"end": 927,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},
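{
"text": "A minimal sketch of the boundary detector follows. We assume a PyTorch/transformers-style implementation and a hidden size of 256 for illustration; the paper does not specify these details:\n\nimport torch\nimport torch.nn as nn\nfrom transformers import BertModel\n\nclass BoundaryDetector(nn.Module):\n    # Chinese BERT encoder with two fully connected layers on top,\n    # classifying a prefix as a complete sentence (1) or a fragment (0).\n    def __init__(self, hidden=256):\n        super().__init__()\n        self.bert = BertModel.from_pretrained('bert-base-chinese')\n        self.fc1 = nn.Linear(self.bert.config.hidden_size, hidden)\n        self.fc2 = nn.Linear(hidden, 2)\n\n    def forward(self, input_ids, attention_mask):\n        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)\n        cls = out.last_hidden_state[:, 0]  # [CLS] representation\n        return self.fc2(torch.relu(self.fc1(cls)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},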
{
"text": "We apply the sentence boundary detection model to streaming ASR output translation. The model returns the prediction to each streaming sequence as a judgment condition for whether it is to be translated. However, we should not set the segmentation point at the first position of the detection. Suppose a detected sentence boundary position is i and the next detected boundary position is i + 1. This means both of the prefix word sequences w 1:i and w 1:i+1 can be seen as a complete sentence. Usually the boundary position i + 1 is better than i. Generally we set a rule that position i is a sentence boundary if the sentence boundary detection model returns true for position i and false for i + 1. In this way, the word sequence (i.e. w 1:i ) is feed to the translation system when it is detected and the untranslated part (i.e. w i+1 ) will be translated in the next sentence. For example, the position i of streaming inputs in Table 5 are detected to boundary's position finally only when the position i is detected to boundary by model while the next position i + 1 isn't detected to boundary by model.",
"cite_spans": [],
"ref_spans": [
{
"start": 932,
"end": 939,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},
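{
"text": "The decision rule can be sketched as follows, where is_boundary wraps the detection model over the current prefix (a simplified illustration of the rule, not the system's actual code):\n\ndef mss_segments(stream, is_boundary):\n    # Position i is a boundary only if the model returns True at i\n    # and False at i + 1.\n    buf, prev = [], False\n    for tok in stream:\n        buf.append(tok)\n        cur = is_boundary(buf)\n        if prev and not cur:\n            yield buf[:-1]   # translate w_{1:i}\n            buf = buf[-1:]  # w_{i+1} starts the next sentence\n            cur = is_boundary(buf)\n        prev = cur\n    if buf:\n        yield buf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.3"
},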
{
"text": "Pre-training and fine-tuning are the most popular training methods in the field of deep learning. It has been proved that this training mode is very effective in improving the performance of the model and is very simple to implement. Therefore, we use the CWMT19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training and Fine-tuning",
"sec_num": "3.4"
},
{
"text": "i \u2212 2 False 0 i \u2212 1 True 0 i True 1 i + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position Sentence",
"sec_num": null
},
{
"text": "False 0 Table 5 : The examples of using model to detect boundaries. 0: Not boundary of sentence, 1: Boundary of sentence data set to pre-train a base-model, and then use the speech translation data provided by the organizer to fine-tune the model. We first train a basic Transformer translation model with CWMT19 data set. In order to adapt to the spoken language domain, we directly fine-tune the pre-trained model on the transcriptions or ASR outputs provided by the organizer and our augmented data. We train our model with the CWMT19 zhen data set, the streaming transcription and the streaming ASR output data sets provided by the evaluation organizer. Because of the evaluation track limit, we did not use the UN parallel corpus and the News Commentary corpus although they were used in the baseline. The CWMT19 zh-en data set includes six sub data sets: the casia2015 corpus, the casict2011 corpus, the casict2015 corpus, the datum2015 corpus, the datum2017 corpus and the neu2017 corpus. The CWMT19 data set contains totally 9,023,708 parallel sentences. They are used in the pre-training of our model. Streaming transcription and streaming ASR output data sets are provided by the evaluation organizer. The transcription data set contains 37,901 pairs and the ASR output data set contains 202,237 pairs. We use them as the fine-tuning data to adapt to the spoken language. Finally we evaluate our system on the development set which contains 956 pairs. The size of the data set is listed in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 5",
"ref_id": null
},
{
"start": 1500,
"end": 1507,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Position Sentence",
"sec_num": null
},
{
"text": "Our model is based on the transformer in tensor2tensor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Settings",
"sec_num": "4.2"
},
{
"text": "We set the parameters of the model as transf ormer_big. And we set the parameter problem as translate_enzh_wmt32k_rev. We train the model on 6 RTX-Titan GPUs for 9 days. Then we use the transcription data and the ASR output data to fine-tune the model respectively on 2 GPUs. We fine-tune the model until it overfits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Settings",
"sec_num": "4.2"
},
{
"text": "The baseline model 4 (Ma et al., 2018) provided by the evaluation organizer is trained on the WMT18 zh-en data set, including CWMT19, the UN parallel corpus, and the News Commentary corpus. The baseline model uses the transformer which is essentially the same as the base model from the original paper (Vaswani et al., 2017) . It applied a Prefix-to-Prefix architecture and Wait-K strategy to the transformer. We test the Wait-1, Wait-3 and the FULL model with fine-tuning on domain data as the comparison to our system. For the Wait-1, Wait-3 setting, the baseline fine-tunes 30,000 steps. For the FULL setting, the baseline fine-tunes 40,000 steps. Ma et al. (2018) uses Average Lagging (AL) as the latency metric. They defined:",
"cite_spans": [
{
"start": 21,
"end": 38,
"text": "(Ma et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 302,
"end": 324,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 651,
"end": 667,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AL g (x, y) = 1 \u03c4 g (|x|) \u03c4g(|x|) \u2211 t=1 g(t) \u2212 t \u2212 1 r",
"eq_num": "(1)"
}
],
"section": "Latency Metric: Average Lagging",
"sec_num": "4.4"
},
{
"text": "Where \u03c4 g (|x|) denotes the cut-off step which is the decoding step when source sentence finishes, g(t) denotes the number of source words processed by the encoder when deciding the target word y t , and r = |x|/|y| is the target-tosource length ratio. The lower the AL value, the lower the delay, the better the real-time system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latency Metric: Average Lagging",
"sec_num": "4.4"
},
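{
"text": "For concreteness, the following is a direct transcription of Eq. (1), together with a Wait-3 example where g(t) = min(t + 2, |x|); the example values are our own illustration and assume |x| = |y|:\n\ndef average_lagging(g, src_len, tgt_len):\n    # g[t-1] = number of source words read when emitting target word t.\n    r = tgt_len / src_len\n    # Cut-off step: first decoding step whose prefix covers the full source.\n    tau = next(t for t in range(1, len(g) + 1) if g[t - 1] >= src_len)\n    return sum(g[t - 1] - (t - 1) / r for t in range(1, tau + 1)) / tau\n\ng = [min(t + 2, 6) for t in range(1, 7)]\nprint(average_lagging(g, src_len=6, tgt_len=6))  # 3.0, i.e. AL = K for Wait-3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latency Metric: Average Lagging",
"sec_num": "4.4"
},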
{
"text": "The results of our streaming transcription system on the development data set are shown in Table 7 . FT-Trans indicates the fine-tuning data set including the original transcriptions and the transcriptions without punctuation (i.e. the depunctuation version). LSS-L indicates the system with the length based sentence segmentation method and the threshold for the length is L. PSS indicates the system with our punctuation based sentence segmentation method. MSS indicates the system with our sentence boundary detection model based sentence segmentation method. Wait-1, Wait-3 and FULL indicate the different settings of the baseline systems. Among these settings, the best AL score is from the Wait-1 baseline and the best BLEU score is from our PSS system. Under similar BLEU score, LSS-17 obtains better AL score than the FULL baseline. Both of the AL and the BLEU score of the LSS-L system grow up with L increases. The MSS system performs better BLEU score by 1.19 than the LSS-L system under similar AL score (i.e. MSS vs. LSS-12). Finally we submitted the PSS setting system because of its high BLEU score and relatively low AL latency compared with the FULL baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Streaming Transcription Translation",
"sec_num": "5.1"
},
{
"text": "The translation performances on the streaming ASR output are shown in Table 8 . FT-ASR represents the systems are fine-tuned on the combination of the ASR output and the ASR output without punctuation. FT-ASR+Aug represents the fine-tuning set includes the FT-ASR, the homophone substitution augmented transcriptions, and their depunctuation version. FT-ASR+Aug+Trans represents the fine-tuning set contains the FT-ASR+Aug and the transcriptions and their depunctuation version.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "As shown in Table 8 , all of our systems outperform the Wait-1, Wait-3 settings of the baseline in BLEU score and our MSS model outperforms the FULL baseline. As more data is added to the fine-tuning set, the performances of the systems will increase accordingly. Both LSS-15 and PSS in FT-ASR+Aug outperform the corresponding systems in FT-ASR, which indicates the effectiveness of the data augmentation. The BLEU score of LSS-15(FT-ASR+Aug+Trans) is 2.22 higher than LSS-15(FT-ASR) while the AL latency of former is better than the latter.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "In the FT-ASR+Aug+Trans, the sentence boundary detection model based sentence segmentation, MSS, obtains higher (i.e. +0.99) BLEU score and lower (i.e. -1.06) AL latency than the LSS-15. The BLEU score of MSS is lower than PSS by 1.46 but the latency is improved by 15.88.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "Compared with the results of transcription translation of FT-Trans in Table 7 , the BLEU scores of the ASR outputs translations relatively decreased. This indicates the effects of the cascade error of the ASR systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "The latency of the LSS in Table 7 and Table 8 are close. The latency of PSS increased from 10 to around 22. This indicates the lack of punctuation in the ASR outputs.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "The MSS system performs close AL latency and less BLEU score drops in transcription and ASR outputs translation. At last we submitted the MSS system to the evaluation track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "Several examples of the translation in differ- ent systems can be seen in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Streaming ASR Output Translation",
"sec_num": "5.2"
},
{
"text": "End-to-end machine translation models, such as transformer (Vaswani et al., 2017) , greatly promote the progress of machine translation research and have been applied to speech translation researches (Schneider and Waibel, 2019; Srinivasan et al., 2019; Wetesko et al., 2019) . Furthermore, several end-to-end based approaches have recently been proposed for simultaneous translations (Zheng et al., 2019b,a) . In order to solve the problem of insufficient parallel corpus data for simultaneous translation tasks, Schneider and Waibel (2019) augmented the available training data using backtranslation. Vial et al. (2019) used BERT pretraining model to train a large number of external monolingual data to achieve data augmentation. simulated the input noise of ASR model and used placeholders, homophones and high-frequency words to replace the original parallel corpus at the character level. Inspired by , we augment the training data by randomly replacing the words in the source sentences with homophones.",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 200,
"end": 228,
"text": "(Schneider and Waibel, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 229,
"end": 253,
"text": "Srinivasan et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 254,
"end": 275,
"text": "Wetesko et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 385,
"end": 408,
"text": "(Zheng et al., 2019b,a)",
"ref_id": null
},
{
"start": 603,
"end": 621,
"text": "Vial et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In order to reduce the translation latency, Ma et al. (2018) used the Prefix-to-Prefix architecture, which predicts the target word with the prefix rather than the whole sequence. Their Wait-K models are used as the baseline and are provided by the shared task organizers. The Wait-K models start to predict the target after the first K source words appear. Zheng et al. (2020) applied ensemble of models trained with a set of Wait-K polices to achieve an adaptive policy. Xiong et al. (2019) have proposed a pre-training based segmentation method which is similar to MSS. However, in the decoding stage, the time complex of this method is O(n 2 ) whereas the time complex of MSS is O(n).",
"cite_spans": [
{
"start": 44,
"end": 60,
"text": "Ma et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 358,
"end": 377,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 473,
"end": 492,
"text": "Xiong et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we describe our submission systems to the the streaming Chinese-to-English translation task of AutoSimTrans 2020. In this system the translation model is trained on the CWMT19 data set with the transformer modedalvi2018incrementall. We leverage homophonic character and word substitutions to augment the fine-tuning speech transcription data set. We implement a punctuation based, a length based and a sentence boundary detection model based sentence segmentation methods to improve the latency of the translation system. Experimental results on the development data sets show that the punctuation based sentence segmentation obtains the best BLEU score with a reasonable latency on the transcription translation track. The results on the ASR outputs translation show the effectiveness of our data augmentation approaches. And the sentence boundary detection model based sentence segmentation gives the low latency and a stable BLEU score in our all systems. However, because we have no enough time to retrain the MT model, some settings of our system are not consistent with the baseline, so it is difficult to judge whether our method is better than baseline's method. In the future, we will finish this comparative experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "https://autosimtrans.github.io/shared",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/saffsd/langid.py 3 https://github.com/tensorflow/tensor2tensor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/autosimtrans/SimulTransBaseline",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Supported by the National Key Research and Development Program of China (No. 2016YFB0801200) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We list several translation results to compare our systems with the baselines on the transcription translation track and the ASR output translation track. As shown in Table 9 and 10, missing translation can be observed in the Wait-K baselines and our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "He has always been ranked among the last, so to speak, the last in those games. What kind of spirit supported him to take part in the competition all the time? For streaming ASR output, as shown in Table 12, missing translation can also be observed in the Wait-K baselines. From Table 13, we can see that in the segmentation of the LSS-15 most of the sentence fragments are incomplete. As shown in Table 14 , the segmentation of the MSS is reasonable and the translation is much better than the LSS-15.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Table 14",
"ref_id": null
}
],
"eq_spans": [],
"section": "Source Reference",
"sec_num": null
},
{
"text": "Translation Wait-1In his every after shock, he won the game, even in the No.1 games.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Every time when he does a match, he will lose, even in the No.1 draw, what is that? FULLIn every game, which is not only about the win, but also about the power that comes to the 1st place, those who support him to go on training all the time. FT-Trans(PSS) In every game he lost, in the second countdown, what is it? What was the strength that kept him going? I keep training. ",
"cite_spans": [
{
"start": 244,
"end": 257,
"text": "FT-Trans(PSS)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wait-3",
"sec_num": null
},
{
"text": "So, is everyone wants to fail? Wait-3Right, everyone never want to fail, and they all want to win every game, even when they are in the second best. FULL That is, to say, every one would never want to win, in every game, or even in the second place, what was the power that supports him to go there and that number? Table 12 : The translations of the sentence in Table 11 .",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 324,
"text": "Table 12",
"ref_id": null
},
{
"start": 363,
"end": 371,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Translation",
"sec_num": null
},
{
"text": "Yes, everyone wants to lose. The winner lost every game. What is second to last? What kind of strength supports him to go on? The game. Table 11 with the setting of LSS-15 on FT-ASR+Aug+Trans.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation Translation",
"sec_num": null
},
{
"text": "Right? Everyone doesn't want to lose. They all want to win. In each game, it is losing or even losing. In the second place. What is power? It supports him to go all the way to the game. Table 11 with the setting of MSS on FT-ASR+Aug+Trans.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation Translation",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Naver labs europe's systems for the wmt19 machine translation robustness task",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Ioan",
"middle": [],
"last": "Calapodescu",
"suffix": ""
},
{
"first": "Claude",
"middle": [],
"last": "Roux",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.06488"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Ioan Calapodescu, and Claude Roux. 2019. Naver labs europe's systems for the wmt19 machine translation robustness task. arXiv preprint arXiv:1907.06488.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "AISHELL-2: transforming mandarin ASR research into industrial scale",
"authors": [
{
"first": "Jiayu",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Na",
"suffix": ""
},
{
"first": "Xuechen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Bu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu. 2018. AISHELL-2: transforming man- darin ASR research into industrial scale. CoRR, abs/1808.10583.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving the robustness of speech translation",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haiyang",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00728"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, and Qun Liu. 2018. Improving the ro- bustness of speech translation. arXiv preprint arXiv:1811.00728.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "STACL: simultaneous translation with integrated anticipation and controllable latency",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2018. STACL: simultaneous translation with integrated an- ticipation and controllable latency. CoRR, abs/1810.08398.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Kit s submission to the iwslt 2019 shared task on text translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Schneider and Alex Waibel. 2019. Kit s sub- mission to the iwslt 2019 shared task on text translation. In Proceedings of the 16th Interna- tional Workshop on Spoken Language Transla- tion.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cmu s machine translation system for iwslt",
"authors": [
{
"first": "Tejas",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Sanabria",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejas Srinivasan, Ramon Sanabria, and Florian Metze. 2019. Cmu s machine translation sys- tem for iwslt 2019.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The lig system for the english-czech text translation task of iwslt",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
},
{
"first": "Didier",
"middle": [],
"last": "Schwab",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Vial, Benjamin Lecouteux, Didier Schwab, Hang Le, and Laurent Besacier. 2019. The lig system for the english-czech text translation task of iwslt 2019.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Samsung and university of edinburgh s system for the iwslt",
"authors": [
{
"first": "Joanna",
"middle": [],
"last": "Wetesko",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Chochowski",
"suffix": ""
},
{
"first": "Pawel",
"middle": [],
"last": "Przybysz",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Valerio Miceli",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Barone",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joanna Wetesko, Marcin Chochowski, Pawel Przy- bysz, Philip Williams, Roman Grundkiewicz, Rico Sennrich, Barry Haddow, Antonio Valerio Miceli Barone, and Alexandra Birch. 2019. Sam- sung and university of edinburgh s system for the iwslt 2019.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dutongchuan: Context-aware translation model for simultaneous interpreting",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.12984"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Dutongchuan: Context-aware transla- tion model for simultaneous interpreting. arXiv preprint arXiv:1907.12984.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Simultaneous translation policies: From fixed to adaptive",
"authors": [
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Si- multaneous translation policies: From fixed to adaptive.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Simpler and faster learning of adaptive policies for simultaneous translation",
"authors": [
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.01559"
]
},
"num": null,
"urls": [],
"raw_text": "Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019a. Simpler and faster learning of adaptive policies for simultaneous translation. arXiv preprint arXiv:1909.01559.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simultaneous translation with flexible policy via restricted imitation learning",
"authors": [
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5816--5822",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1582"
]
},
"num": null,
"urls": [],
"raw_text": "Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019b. Simultaneous translation with flexible policy via restricted imitation learn- ing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguis- tics, pages 5816-5822, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"html": null,
"num": null,
"text": "An example of streaming ASR output translations.",
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td>Original Chinese</td><td>(xinren)</td><td/></tr><tr><td>English</td><td>This society hasn't trust</td><td>it doesn't work</td></tr><tr><td>Substitution</td><td>(xinren)</td><td/></tr><tr><td>English</td><td>This society hasn't newcomers</td><td>it doesn't work</td></tr></table>",
"html": null,
"num": null,
"text": "A randomly selected single character (in red bold font) is substituted by its homophonic character. The corresponding pinyin is included in the bracket.",
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"num": null,
"text": "A randomly selected word (in red bold font) is substituted by its homophonic word. The corresponding pinyin is included in the bracket.",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Examples of the train data set of the model. 1: Complete sentences. 0: Incomplete sentence.",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"html": null,
"num": null,
"text": "The size of different data sets.",
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"html": null,
"num": null,
"text": "The translation results on the development data set of streaming transcriptions.",
"type_str": "table"
},
"TABREF9": {
"content": "<table/>",
"html": null,
"num": null,
"text": "The translation results on the development data set of streaming ASR outputs.",
"type_str": "table"
}
}
}
}