{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:43.549332Z" }, "title": "System Description on Automatic Simultaneous Translation Workshop", "authors": [ { "first": "Zecheng", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Zhejiang University", "location": { "settlement": "Hangzhou", "country": "China" } }, "email": "lizechng@zju.edu.cn" }, { "first": "Sun", "middle": [], "last": "Yue", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xiamen University", "location": { "settlement": "Xiamen", "country": "China" } }, "email": "" }, { "first": "Haoze", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "North China Institute of Aerospace Engineering", "location": { "settlement": "Langfang", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes our system submitted on the third automatic simultaneous translation workshop at NAACL2022. We participate in the Chinese audio\u2192English text direction of Chinese-to-English translation. Our speech-totext system is a pipeline system, in which we resort to rhymological features for audio split, ASRT model for speech recoginition, STACL model for streaming text translation. To translate streaming text, we use wait-k policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. We propose a competitive simultaneous translation system and rank 3rd in the audio input track. The code will release soon.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes our system submitted on the third automatic simultaneous translation workshop at NAACL2022. We participate in the Chinese audio\u2192English text direction of Chinese-to-English translation. Our speech-totext system is a pipeline system, in which we resort to rhymological features for audio split, ASRT model for speech recoginition, STACL model for streaming text translation. To translate streaming text, we use wait-k policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. We propose a competitive simultaneous translation system and rank 3rd in the audio input track. The code will release soon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Simultaneous translation refers to translating the message from the speaker to the audience in realtime without interrupting the speakers, which is a challenging task and has become an increasingly popular research field in recent years.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe our system submitted at the 3rd automatic simultaneous translation workshop, which consists of a rhymeological features based audio split model, an end to end speech recognition model and a wait-k (Ma et al., 2019) based streaming text translation model. The system input is Chinese audio file and the output is English translation text. 
A temporary streaming transcription is obtained by the audio split and speech recognition models, and then passed into the machine translation model to produce the target system output.", "cite_spans": [ { "start": 224, "end": 241, "text": "(Ma et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the automatic audio split model, we calculate the rhythmological features (Weninger et al., 2013) of the audio input and resort to an adaptive policy to set the short-term energy threshold and zero crossing rate threshold for speech splitting. For the automatic speech recognition model, we use the ASRT model 1 , which is based on a DCNN model and a CTC decoder (Graves et al., 2006). Meanwhile, we expand the training set by adding the Aishell-1 (Bu et al., 2017) and Thchs-30 (Wang and Zhang, 2015) datasets. For streaming text translation, our model is based on STACL (Ma et al., 2019). We use some human rules and a pre-trained language model to filter the parallel corpus. At the inference step, we apply the wait-k policy. Both pre-processing and post-processing are applied to improve terminology translation and to handle the word errors produced by the ASR system.", "cite_spans": [ { "start": 74, "end": 97, "text": "(Weninger et al., 2013)", "ref_id": "BIBREF8" }, { "start": 333, "end": 354, "text": "(Graves et al., 2006)", "ref_id": "BIBREF2" }, { "start": 417, "end": 434, "text": "(Bu et al., 2017)", "ref_id": "BIBREF0" }, { "start": 448, "end": 470, "text": "(Wang and Zhang, 2015)", "ref_id": "BIBREF7" }, { "start": 541, "end": 558, "text": "(Ma et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since our submission is a pipeline system, the rest of this paper describes the audio split, automatic speech recognition, and machine translation sub-modules separately. We first describe the training and development datasets we used, and then introduce the data processing methods. Next, we describe our system architecture and experiment results. Lastly, we draw a conclusion about our system by analyzing the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the audio data of ASR, we use the Qianyan audio dataset provided by the NAACL workshop (Zhang et al., 2021), Aishell-1 (Bu et al., 2017), and Thchs-30 (Wang and Zhang, 2015). For the text data of MT, we use CWMT19 2 and the simultaneous translation corpus provided by the organizer of the workshop.", "cite_spans": [ { "start": 80, "end": 100, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF9" }, { "start": 113, "end": 130, "text": "(Bu et al., 2017)", "ref_id": "BIBREF0" }, { "start": 142, "end": 164, "text": "(Wang and Zhang, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "For the Qianyan audio dataset, we split each audio file into sentences according to the sentence-level transcription. After processing, the blank parts of all audio files were removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Audio data", "sec_num": "2.1" }, { "text": "For the other datasets, we first process the transcription files with rules to obtain the path and filename of every transcription. 
Then we use the wave library to read the audio files and obtain the duration of each audio clip.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Audio data", "sec_num": "2.1" }, { "text": "Table 1 : Zh-En audio training datasets (duration and size). Qianyan (NAACL): 65h, 5.4G; Aishell-1: 178h, 14.51G; Thchs-30: 40h, 6.01G.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Duration", "sec_num": null }, { "text": "In order to mitigate the matching issues between audio files and transcription text, we use the pre-trained ASRT model to produce pronunciation results from the audio input, and then obtain streaming text from the pronunciation model. Table 1 shows the amount of training data.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Duration", "sec_num": null }, { "text": "For the CWMT19 and Baidu Speech Translation Corpus (BSTC) (Zhang et al., 2021) datasets, we first filter out the sentence pairs whose English sentence is longer than 120 words. Meanwhile, a few Chinese characters in the data are traditional characters, and we convert them to simplified ones. Then all Chinese sentences are segmented with the Jieba Chinese Segmentation Tool 3 and all English sentences are tokenized and truecased with Moses 4 . Lastly, both the Chinese and English data are encoded by BPE (Sennrich et al., 2015) with Subword-NMT 5 to train a byte pair encoding model.", "cite_spans": [ { "start": 53, "end": 72, "text": "(Zhang et al., 2021", "ref_id": "BIBREF9" }, { "start": 504, "end": 527, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Text data", "sec_num": "2.2" }, { "text": "Our system consists of a rhythmological-feature-based audio split model, an end-to-end speech recognition model, and a wait-k based streaming text translation model. The training of the speech recognition and machine translation models is carried out on a device with four Nvidia 1080Ti GPUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System description", "sec_num": "3" }, { "text": "For the automatic audio split model, we use traditional acoustic methods. We first calculate the rhythmological features of the audio input with the Librosa audio processing library 6 and the openSMILE toolkit (Eyben et al., 2010). According to the short-term energy and zero crossing rate of the rhythmological features, we can detect the endpoints of voice. This can detect all valid speech parts of a section of speech.", "cite_spans": [ { "start": 213, "end": 233, "text": "(Eyben et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 403, "end": 411, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Audio split", "sec_num": "3.1" }, { "text": "Figure 1 : Audio Split Process. The solid red line is the result of Step-1, and the dashed green line is the result of Step-2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Audio split", "sec_num": "3.1" }, { "text": "Table 2 : Endpoint detection parameters (Step-1 / Step-2): frame length 400 / 240; min. turbid interval 25 / 20; short-term energy threshold 1.0 / 0.4; zero crossing rate threshold 0.8 / 1.2. The endpoint detection consists of two steps. The first step is the overall endpoint detection used to segment the long audio file, and the second step is the fine-tuning of the split audio. 
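A minimal sketch of the first, coarse endpoint-detection pass is given below. It assumes 16 kHz, 16-bit mono PCM input and reuses the Step-1 values from Table 2; the adaptive threshold policy and the second, finer pass of our system are omitted, and the exact normalization of the thresholds may differ from our implementation.

```python
# Coarse endpoint detection from short-term energy and zero crossing rate.
# Frame length and thresholds follow the Step-1 column of Table 2; the
# per-file normalization used here is a simplification.
import wave
import numpy as np

def read_pcm(path: str) -> np.ndarray:
    # load 16-bit mono PCM and scale samples to [-1, 1]
    with wave.open(path, 'rb') as w:
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return pcm.astype(np.float32) / 32768.0

def detect_speech_frames(signal: np.ndarray,
                         frame_len: int = 400,
                         energy_thr: float = 1.0,
                         zcr_thr: float = 0.8) -> np.ndarray:
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    energy = energy / (energy.mean() + 1e-8)                     # normalize per file
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    zcr = zcr / (zcr.mean() + 1e-8)
    # keep a frame as speech when either cue is clearly above its threshold;
    # contiguous runs of kept frames then give the split points
    return (energy > energy_thr) | (zcr > zcr_thr)
```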
The audio split process is shown in Figure 1. The hyperparameters we use are shown in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 368, "end": 376, "text": "Figure 1", "ref_id": null }, { "start": 424, "end": 431, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Parameter", "sec_num": null }, { "text": "The speech recognition model we use is the ASRT model, which is implemented with a deep convolutional neural network, a long short-term memory network, an attention mechanism, and CTC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech recognition", "sec_num": "3.2" }, { "text": "We first limit the maximum length of the split audio to 16 seconds as the input of the ASRT model. The speech recognition model outputs the corresponding pronunciation sequence. Then we use a probability-map-based maximum entropy Markov model to convert the pronunciation sequence into recognized text. To improve recognition accuracy, we use the model pre-trained on the Aishell-1 and Thchs-30 datasets and fine-tune it on the audio dataset provided by the NAACL workshop. We list the model configuration in Table 3 (audio length 1600, feature length 200, label length 64, channels 1, output size 1428, optimizer Adam).", "cite_spans": [], "ref_spans": [ { "start": 500, "end": 507, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Speech recognition", "sec_num": "3.2" }, { "text": "We use STACL as our machine translation model. We train the model for over two days; the BLEU (Papineni et al., 2002) score increased rapidly at the beginning and the growth slowed after 20 hours. After the loss converged, we save the last checkpoint as the final model. We list the model configuration in Table 4 and the training parameters in Table 5 (label smoothing 0.1, learning rate 2.0, warmup steps 8000, maximum sentence length 120). The simultaneous policy we use is wait-k, which first waits for k source words and then translates concurrently with the rest of the source sentence, i.e., the output is always k words behind the input.", "cite_spans": [ { "start": 189, "end": 212, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 401, "end": 408, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 436, "end": 443, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Machine translation", "sec_num": "3.3" }, { "text": "We fine-tune the STACL model on the BSTC dataset to improve the translation quality on the simultaneous translation task, since fine-tuning is effective for building a domain-adaptive model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine translation", "sec_num": "3.3" }, { "text": "In this section, we evaluate our system on the development set of the Baidu Speech Translation Corpus. The two metrics used are case-sensitive detokenized BLEU (Papineni et al., 2002) and Consecutive Wait (CW) (Gu et al., 2016), for translation quality and latency, respectively. CW considers", "cite_spans": [ { "start": 160, "end": 183, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF5" }, { "start": 209, "end": 226, "text": "(Gu et al., 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "how many source words are waited for consecutively between two target words, and thus a larger CW means longer latency. 
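For illustration, the sketch below spells out the wait-k read/write schedule and a simplified computation of CW over the resulting action sequence; it is not the official evaluation script and may differ from it in details such as how the initial wait is counted.

```python
# Wait-k schedule and a simplified Consecutive Wait (CW) computation.
def wait_k_actions(src_len: int, tgt_len: int, k: int):
    # Read the first k source words, then alternate write/read until the
    # source is exhausted; remaining target words are written at the end.
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(written + k, src_len):
            actions.append('READ')
            read += 1
        else:
            actions.append('WRITE')
            written += 1
    return actions

def consecutive_wait(actions):
    # Average number of source words read consecutively between two writes.
    waits, current = [], 0
    for a in actions:
        if a == 'READ':
            current += 1
        else:
            if current > 0:
                waits.append(current)
            current = 0
    return sum(waits) / max(len(waits), 1)

print(consecutive_wait(wait_k_actions(src_len=10, tgt_len=9, k=3)))  # 1.25
```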
We set the threshold k in the wait-k policy to various values and obtain multiple results, as shown in Figure 2. The speech in the development set is difficult for the ASR model we trained ourselves, resulting in a high character error rate. The errors caused by ASR are propagated to MT, and thus the BLEU is much lower than that in the text-to-text track.", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 316, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parameter", "sec_num": null }, { "text": "This paper describes our submission to the 3rd automatic simultaneous translation workshop at NAACL2022. We detail our process of data filtering and model training. The Consecutive Wait (CW) of the best point reached 14.06, while we obtain a BLEU value of 6.17 in the audio input track. In future work, we will continue to research end-to-end speech translation models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/nl8590687/ASRT_SpeechRecognition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://mteval.cipsc.org.cn:81/agreement/description", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/fxsjy/jieba 4 https://github.com/moses-smt/mosesdecoder 5 https://github.com/rsennrich/subword-nmt 6 https://github.com/librosa/librosa", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline", "authors": [ { "first": "Hui", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Jiayu", "middle": [], "last": "Du", "suffix": "" }, { "first": "Xingyu", "middle": [], "last": "Na", "suffix": "" }, { "first": "Bengu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2017, "venue": "the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. 2017. Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline. In 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), pages 1-5. IEEE.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Opensmile: the munich versatile and fast opensource audio feature extractor", "authors": [ { "first": "Florian", "middle": [], "last": "Eyben", "suffix": "" }, { "first": "Martin", "middle": [], "last": "W\u00f6llmer", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Schuller", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 18th ACM international conference on Multimedia", "volume": "", "issue": "", "pages": "1459--1462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Eyben, Martin W\u00f6llmer, and Bj\u00f6rn Schuller. 2010. Opensmile: the munich versatile and fast open-source audio feature extractor. 
In Proceedings of the 18th ACM international conference on Multimedia, pages 1459-1462.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Faustino", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "369--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369-376.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to translate in real-time with neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.00388" ] }, "num": null, "urls": [], "raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor OK Li. 2016. Learning to translate in real-time with neural machine translation. arXiv preprint arXiv:1610.00388.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework", "authors": [ { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Renjie", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Kaibo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Baigong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hairong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3025--3036", "other_ids": { "DOI": [ "10.18653/v1/P19-1289" ] }, "num": null, "urls": [], "raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous trans- lation with implicit anticipation and controllable la- tency using prefix-to-prefix framework. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Flo- rence, Italy. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 311-318.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Thchs-30: A free chinese speech corpus", "authors": [ { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xuewei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.01882" ] }, "num": null, "urls": [], "raw_text": "Dong Wang and Xuewei Zhang. 2015. Thchs-30: A free chinese speech corpus. arXiv preprint arXiv:1512.01882.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the acoustics of emotion in audio: what speech, music, and sound have in common", "authors": [ { "first": "Felix", "middle": [], "last": "Weninger", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Eyben", "suffix": "" }, { "first": "W", "middle": [], "last": "Bj\u00f6rn", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Schuller", "suffix": "" }, { "first": "Klaus", "middle": [ "R" ], "last": "Mortillaro", "suffix": "" }, { "first": "", "middle": [], "last": "Scherer", "suffix": "" } ], "year": 2013, "venue": "Frontiers in psychology", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Weninger, Florian Eyben, Bj\u00f6rn W Schuller, Mar- cello Mortillaro, and Klaus R Scherer. 2013. On the acoustics of emotion in audio: what speech, music, and sound have in common. 
Frontiers in psychology, 4:292.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bstc: A large-scale chinese-english speech translation dataset", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qinfei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.03575" ] }, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, Haifeng Wang, Ying Chen, and Qinfei Li. 2021. Bstc: A large-scale chinese-english speech translation dataset. arXiv preprint arXiv:2104.03575.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Experimental Result of speech-to-text track", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "text": "", "num": null, "html": null, "content": "" }, "TABREF1": { "type_str": "table", "text": "Speech recognition model configuration", "num": null, "html": null, "content": "
Configuration Value
Encoder/Decoder depth 6
Attention heads 8
Word Embedding 512
Chinese vocabulary size 10000
English vocabulary size 10000
Optimizer Adam
" }, "TABREF2": { "type_str": "table", "text": "Machine translation model configuration the BLEU", "num": null, "html": null, "content": "" }, "TABREF3": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
" } } } }