{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:09:45.318619Z" }, "title": "Findings of the Third Workshop on Automatic Simultaneous Translation", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Oregon State University", "location": {} }, "email": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Huawei Noah's Ark Lab", "institution": "", "location": {} }, "email": "" }, { "first": "Julia", "middle": [], "last": "Ive", "suffix": "", "affiliation": { "laboratory": "", "institution": "Queen Mary University of London", "location": {} }, "email": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper reports the results of the shared task we hosted on the Third Workshop of Automatic Simultaneous Translation (AutoSim-Trans). The shared task aims to promote the development of text-to-text and speech-to-text simultaneous translation, and includes Chinese-English and English-Spanish tracks. The number of systems submitted this year has increased fourfold compared with last year. Additionally, the top 1 ranked system in the speech-totext track is the first end-to-end submission we have received in the past three years, which has shown great potential. This paper reports the results and descriptions of the 14 participating teams, compares different evaluation metrics, and revisits the ranking method.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper reports the results of the shared task we hosted on the Third Workshop of Automatic Simultaneous Translation (AutoSim-Trans). The shared task aims to promote the development of text-to-text and speech-to-text simultaneous translation, and includes Chinese-English and English-Spanish tracks. The number of systems submitted this year has increased fourfold compared with last year. Additionally, the top 1 ranked system in the speech-totext track is the first end-to-end submission we have received in the past three years, which has shown great potential. This paper reports the results and descriptions of the 14 participating teams, compares different evaluation metrics, and revisits the ranking method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Simultaneous translation (ST), which aims to perform translation from source language speech into the target language with high quality and low latency, is widely used in many scenarios, such as international conferences, live broadcasts, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally, the research of ST falls into two categories: the cascade method, and the end-to-end method. 
A typical cascade ST system consists of an automatic speech recognition (ASR) system that transcribes the source speech into streaming text (Moritz et al., 2020; Wang et al., 2020a), a machine translation (MT) system that translates the text into the target language, and a policy module that lies between them and decides when to start translating (Oda et al., 2014; Dalvi et al., 2018; Ma et al., 2019; Arivazhagan et al., 2019; Zhang et al., 2020; Wilken et al., 2020) . Another branch of work proposes end-to-end ST methods that attempt to translate from source speech to target text directly, without transcribing the source speech (Bansal et al., 2018; Di Gangi et al., 2019; Jia et al., 2019) .", "cite_spans": [ { "start": 244, "end": 265, "text": "(Moritz et al., 2020;", "ref_id": "BIBREF21" }, { "start": 266, "end": 285, "text": "Wang et al., 2020a;", "ref_id": "BIBREF32" }, { "start": 450, "end": 468, "text": "(Oda et al., 2014;", "ref_id": "BIBREF22" }, { "start": 469, "end": 488, "text": "Dalvi et al., 2018;", "ref_id": "BIBREF6" }, { "start": 489, "end": 505, "text": "Ma et al., 2019;", "ref_id": "BIBREF19" }, { "start": 506, "end": 531, "text": "Arivazhagan et al., 2019;", "ref_id": "BIBREF1" }, { "start": 532, "end": 551, "text": "Zhang et al., 2020;", "ref_id": "BIBREF44" }, { "start": 552, "end": 572, "text": "Wilken et al., 2020)", "ref_id": "BIBREF35" }, { "start": 737, "end": 758, "text": "(Bansal et al., 2018;", "ref_id": "BIBREF2" }, { "start": 759, "end": 781, "text": "Di Gangi et al., 2019;", "ref_id": "BIBREF7" }, { "start": 782, "end": 799, "text": "Jia et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We host a shared task at the Third AutoSimTrans Workshop to promote the exploration of advanced ST approaches. The shared task builds on the past two editions (Wu et al., 2020; Zhang et al., 2021c) . We set up three tracks this year:", "cite_spans": [ { "start": 161, "end": 178, "text": "(Wu et al., 2020;", "ref_id": "BIBREF44" }, { "start": 179, "end": 199, "text": "Zhang et al., 2021c)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Chinese-English Text-to-text ST track,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where participants are asked to generate real-time English translations based on the input Chinese text. The input is derived from human-annotated transcriptions of TED-like lectures, which contain speech disfluencies but no ASR errors. We simulate streaming speech recognition results by a series of prefixes, where each n-word transcription is represented by n sentence prefixes whose lengths increase from 1 to n (a sketch of this simulation is given in Section 2.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The Chinese-English Speech-to-text track considers real ST scenarios that require real-time translation directly from speech. Participants can adopt either cascade or end-to-end systems. The test sets for the first two tracks are drawn from the same set of audio, so that the test results may capture the differences brought by the different input modalities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The English-Spanish Text-to-text track is newly added this year. 
We use the UN Parallel Corpus 1 for training and testing; it is composed of official records of the United Nations and other parliamentary documents, with no disfluencies and no ASR errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The objective of ST systems is to achieve high translation quality with low latency. During the evaluation period, each participant can submit once a day. To examine the quality-latency trade-off, each submission is required to contain multiple folders produced with different policies and varying latency. Our platform supports automatic evaluation and plots the result of each folder as one point on a latency-quality diagram. We received 24 submissions from 14 teams this year, 4 times as many as last year. The 14 participants are listed in Table 1 . We analyze the submissions and obtain the following findings:", "cite_spans": [], "ref_spans": [ { "start": 562, "end": 569, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The translation quality of the systems in the speech-to-text track, both pipeline and end-to-end, lags behind the text-to-text track by more than 9.0 BLEU. This suggests the necessity of exploring robust speech translation systems for practical ST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We received an end-to-end ST submission for the first time in three years; it outperforms all pipeline-based systems submitted this year, demonstrating the potential of end-to-end ST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Experiments comparing multiple quality estimation metrics suggest that BLEURT may be more suitable for ST than BLEU, given that it correlates best with human ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will introduce the details of the three tracks (Section 2), report and analyze the submissions (Section 3), and finally compare and analyze evaluation and ranking metrics (Section 4).", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 109, "text": "(Section 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first introduce the corpora used in the shared task, then describe the system evaluation method as well as the differences from the past editions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared Task", "sec_num": "2" }, { "text": "The corpora provided for training and evaluation are listed in Table 2 . For the first two (Zh\u2192En ST) tracks, we provide a large-scale text translation corpus, CWMT19 2 , along with a speech translation dataset, BSTC (Zhang et al., 2021b) . CWMT19 contains 9 million Zh\u2192En sentence pairs collected from the web, bilingual books, movies, legal documents, etc. BSTC contains 70.41 hours of Mandarin speech from three TED-like content producers, corresponding to about 40K source sentences. Compared with last year, we expanded the test set of BSTC from 6 talks (1.46 hours) to 20 talks (4.26 hours). For En\u2192Es ST, we use a text translation corpus, the United Nations Parallel Corpus (UN) 3 , to simulate the ST scenario. All data can be obtained at the site of our shared task 4 after registration.", "cite_spans": [ { "start": 219, "end": 240, "text": "(Zhang et al., 2021b)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2.1" },
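{ "text": "In both text-to-text tracks, the streaming input is thus simulated by feeding a system the growing prefixes of each transcription. As an illustration, a minimal sketch of this prefix simulation (our own illustrative helper, not part of the released tooling):

def streaming_prefixes(transcription):
    # Simulate streaming ASR output for an n-word transcription by
    # yielding the n prefixes whose lengths grow from 1 to n.
    words = transcription.split()
    for i in range(1, len(words) + 1):
        yield \" \".join(words[:i])

# Example: a 3-word sentence yields 3 growing prefixes.
assert list(streaming_prefixes(\"thank you all\")) == [\"thank\", \"thank you\", \"thank you all\"]

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.1" },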
{ "text": "The two text-to-text tracks restrict participants to using the provided corpora only, while the speech-to-text track allows the use of additional ASR datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.1" }, { "text": "Table 2: A summary of the provided corpora. We report the number of talks (Talks), the number of sentence pairs (Utterances), the number of words 5 in the transcriptions and translations, and the duration of the speeches in the corresponding corpora.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "System Evaluation", "sec_num": "2.2" }, { "text": "The ST systems are evaluated with respect to translation quality and latency. For translation quality, BLEU (Papineni et al., 2002) is the most commonly used metric. Although neural network-based metrics such as BERTScore and BLEURT (Sellam et al., 2020) have been proven superior to BLEU in text translation, little work has used them to evaluate ST systems. For the evaluation of latency, recent work has proposed metrics such as Average Proportion (AP) (Cho and Esipova, 2016) , Average Lagging (AL) (Ma et al., 2019) , Consecutive Wait (CW) (Gu et al., 2017) , and Differentiable Average Lagging (DAL) (Arivazhagan et al., 2019) . In our shared task this year, we adopt AL-BLEU and CW-BLEU to evaluate systems in the text-to-text tracks and the speech-to-text track, respectively. AL estimates the degree of delay by the number of words that the target lags behind the source speaker: it assumes an ideal policy that generates the translation at the same speed as the speaker's utterance, and measures the average number of words by which a system lags behind this ideal policy (a sketch follows below). CW measures the average lag between every two WRITE operations by calculating the average number of source words waited for.", "cite_spans": [ { "start": 108, "end": 131, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" }, { "start": 230, "end": 251, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF26" }, { "start": 225, "end": 248, "text": "(Cho and Esipova, 2016)", "ref_id": "BIBREF5" }, { "start": 272, "end": 289, "text": "(Ma et al., 2019)", "ref_id": "BIBREF19" }, { "start": 314, "end": 331, "text": "(Gu et al., 2017)", "ref_id": "BIBREF11" }, { "start": 373, "end": 399, "text": "(Arivazhagan et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "System Evaluation", "sec_num": "2.2" },
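{ "text": "To make the AL computation concrete, here is a minimal sketch following the cited definition (our own illustrative code, not the official evaluation script; g[t] denotes the number of source words read before emitting target word t+1):

def average_lagging(g, src_len, tgt_len):
    # Average Lagging (Ma et al., 2019): average number of source words by
    # which the system lags behind an ideal policy that emits target words
    # at rate gamma = tgt_len / src_len.
    gamma = tgt_len / src_len
    total, tau = 0.0, 0
    for t, g_t in enumerate(g):
        total += g_t - t / gamma
        tau += 1
        if g_t >= src_len:  # stop at the first step that consumed all source words
            break
    return total / tau

# wait-3 behaviour on a 6-word source with a 6-word target:
print(average_lagging([3, 4, 5, 6, 6, 6], src_len=6, tgt_len=6))  # 3.0

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation", "sec_num": "2.2" },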
{ "text": "We will conduct experiments and discuss alternative metrics for evaluating translation quality and latency in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation", "sec_num": "2.2" }, { "text": "Submission: Each team can participate in multiple tracks. Participants in each track are ranked independently. Different from previous editions, the test set inputs this year are released to participants, who only need to submit the simultaneous translation results on the test set to our platform, rather than Docker projects. Before the final submission, participants can submit once a day to view their results and those of other teams on the leaderboard. Each submission needs to contain N (N \u2265 1) folders containing the ST results produced with different policies or models. The submissions are evaluated automatically and plotted as N points on the latency-quality graph. N is determined by the teams themselves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Submission and Ranking", "sec_num": "2.3" }, { "text": "5 For Chinese, we record the number of characters in the transcriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Submission and Ranking", "sec_num": "2.3" }, { "text": "Ranking: Intuitively, a system is considered better if it generates higher-quality results under the same delay, or achieves a lower delay when generating results of the same quality. In the shared task, we rank the submitted systems with the Iterative Monotonic Optimal Sequence (I-MOS) algorithm (Zhang et al., 2021c) . It iteratively searches for a monotonic optimal sequence (MOS), which contains the points with the best translation quality at the corresponding delays. Teams that have points selected on the MOS in the k-th iteration are classified into the k-th level and are then removed from the candidate teams for the (k+1)-th iteration. All teams of the k-th level rank higher than those of the (k+1)-th level. Teams belonging to the same level are ranked according to the proportion of their points on the MOS.", "cite_spans": [ { "start": 419, "end": 440, "text": "(Zhang et al., 2021c)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Submission and Ranking", "sec_num": "2.3" }, { "text": "In addition to setting up a new En-Es text-to-text ST track, this year's shared task has the following two differences compared with the past editions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences With Past Editions", "sec_num": "2.4" }, { "text": "\u2022 Participants submit ST results instead of Docker projects, which is much easier for participants. For this, we released the audio and corresponding transcriptions for the first two tracks of Zh-En ST and extended the test set from 6 talks to 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences With Past Editions", "sec_num": "2.4" }, { "text": "\u2022 This year's shared task allows each team to submit once per day, rather than only once in the entire challenge period. We developed an automated evaluation platform, enabling participants to access their evaluation results in real time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences With Past Editions", "sec_num": "2.4" }, { "text": "3 System Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Results", "sec_num": "3" }, { "text": "The first two tracks are for Chinese-English ST from Chinese text and speech, respectively. We received submissions from 13 teams: all 13 entered the text-to-text track, and 4 of them also participated in the speech-to-text track. Their latency-quality trade-off results are plotted in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 292, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Chinese-English Simultaneous Translation", "sec_num": "3.1" }, { "text": "The ranking of the 13 participants in the Zh\u2192En text-to-text track is shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" },
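{ "text": "Several of the systems described below build on the wait-k prefix-to-prefix policy (Ma et al., 2019). As background, a minimal sketch of wait-k decoding (our own illustrative code; translate_step is an assumed model interface that returns the next target token, or None when the translation is complete):

def wait_k_decode(source_stream, k, translate_step):
    # READ the first k source tokens, then alternate WRITE/READ one token
    # at a time; once the source is exhausted, WRITE the remaining tokens.
    src, tgt = [], []
    for token in source_stream:
        src.append(token)
        if len(src) >= k:
            nxt = translate_step(src, tgt)
            if nxt is not None:
                tgt.append(nxt)
    while True:
        nxt = translate_step(src, tgt)
        if nxt is None:
            break
        tgt.append(nxt)
    return tgt

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" },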
{ "text": "We list the approaches used by some of the participants as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" }, { "text": "\u2022 BIT-Xiaomi changed the granularity of the wait-k policy (Ma et al., 2019) from Chinese characters to words. They proposed to train a streaming word segmentation model to detect Chinese word boundaries in real time, and performed prefix-to-prefix training of wait-k according to the number of words. The MT model is a Transformer-big (Vaswani et al., 2017) model trained with data selection, data augmentation (Sennrich et al., 2015) , R-drop, and noise-adding strategies to improve the model's robustness.", "cite_spans": [ { "start": 54, "end": 71, "text": "(Ma et al., 2019)", "ref_id": "BIBREF19" }, { "start": 331, "end": 353, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF31" }, { "start": 407, "end": 430, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" }, { "text": "\u2022 USST-ECUST (Zhu and Yu, 2022) adopted a Transformer with 12 encoder layers and 6 decoder layers as the MT model, which is pretrained on a large-scale Zh-En corpus containing 9 million sentence pairs from CWMT19 and 5.7 million pairs of pseudo data generated through self-training and back-translation (Sennrich et al., 2015; Edunov et al., 2018) . The model is then fine-tuned with prefix-to-prefix training (Ma et al., 2019) on a mixture of the BSTC corpus and a subset of CWMT19 that is most similar to BSTC for better domain adaptation.", "cite_spans": [ { "start": 131, "end": 154, "text": "(Sennrich et al., 2015;", "ref_id": "BIBREF27" }, { "start": 155, "end": 175, "text": "Edunov et al., 2018)", "ref_id": "BIBREF8" }, { "start": 237, "end": 253, "text": "(Ma et al., 2019", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" }, { "text": "Table 4: The ranking of the Zh\u2192En speech-to-text ST track. Rank 1: Huawei (2.00); Rank 2: BIT-Xiaomi (1.50); Rank 3: ZXN (1.00); Rank 3: HAU (1.00).", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" }, { "text": "\u2022 HAU (Zhang, 2022) trained a prefix-to-prefix model using the wait-k policy with k = 1 and k = 3 for text-to-text simultaneous translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Text-to-text track", "sec_num": "3.1.1" }, { "text": "The ranking of the 4 participants in the Zh\u2192En speech-to-text track is listed in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "\u2022 Huawei (Zeng et al., 2022) built an end-to-end simultaneous translation model based on RealTranS (Zeng et al., 2021) . It includes a CTC-guided acoustic encoder, a semantic encoder, and a translation decoder. The acoustic encoder is initialized from a pre-trained ASR model, and the semantic encoder and the translation decoder are initialized from a pretrained NMT model. 
In the fine-tuning stage, they first generated pseudo ST training data by translating the transcripts of 20,000 hours of in-house ASR corpora into the target language, and then trained the model with the multi-path wait-k (Elbayad et al., 2019) policy on the pseudo data together with BSTC.", "cite_spans": [ { "start": 9, "end": 28, "text": "(Zeng et al., 2022)", "ref_id": "BIBREF40" }, { "start": 98, "end": 117, "text": "(Zeng et al., 2021)", "ref_id": "BIBREF39" }, { "start": 585, "end": 607, "text": "(Elbayad et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "\u2022 BIT-Xiaomi took a pipeline approach. The audio inputs are first segmented by Silero-VAD (Team, 2021), then sent to a Transformer-based ASR model trained on AISHELL-1 (Bu et al., 2017) and BSTC (Zhang et al., 2021b) . The recognized text is then sent to the policy module and the MT model, which decide when to translate and produce the translation. The MT model and the policy module are the same as those used in the text-to-text track.", "cite_spans": [ { "start": 170, "end": 187, "text": "(Bu et al., 2017)", "ref_id": "BIBREF3" }, { "start": 197, "end": 218, "text": "(Zhang et al., 2021b)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "Figure 1: The evaluation results of the first two tracks. The order in the legend (line by line) denotes the ranking result, which is calculated by the I-MOS algorithm. It iteratively builds the monotonic optimal sequence (MOS) of level k (MOS-k) and classifies teams that have points on it into the k-th level. We use points of the same color but different shapes to represent the results of teams belonging to the same level, and the teams are ranked according to the proportion of points on the corresponding MOS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "\u2022 ZXN developed a pipeline system with an audio segmentation model, an ASR system, and a wait-k based MT model. The audio segmentation model performs endpoint detection (EPD) based on short-term energy and the zero-crossing rate (Rabiner and Sambur, 1975) . The ASR system includes a convolutional model with a CTC decoder (Graves et al., 2006) to generate pinyin sequences, followed by a language model based on the maximum entropy Markov model (MEMM) to produce Chinese characters. The MT model adopts Transformer-base (Vaswani et al., 2017) and is trained in the prefix-to-prefix mode. The ASR model is pre-trained on AISHELL-1 and Thchs-30 (Wang and Zhang, 2015) , and the MT model is pre-trained on CWMT19; both are then fine-tuned on BSTC.", "cite_spans": [ { "start": 227, "end": 252, "text": "(Rabiner and Sambur, 1975", "ref_id": "BIBREF25" }, { "start": 321, "end": 342, "text": "(Graves et al., 2006)", "ref_id": "BIBREF10" }, { "start": 519, "end": 541, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF31" }, { "start": 643, "end": 665, "text": "(Wang and Zhang, 2015)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "\u2022 HAU (Zhang, 2022) also took a pipeline approach. They adopted DeepSpeech2 (Amodei et al., 2016) as the ASR model, which is trained on AISHELL-1 only, without further fine-tuning on BSTC. The ST policy and the MT model they used are the same as those used in the text-to-text track.", "cite_spans": [ { "start": 6, "end": 19, "text": "(Zhang, 2022)", "ref_id": "BIBREF47" }, { "start": 74, "end": 95, "text": "(Amodei et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" },
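{ "text": "The pipeline submissions above share the same overall shape: audio segmentation, streaming ASR, a read-write policy, and an MT model. A schematic sketch of this loop (all four component interfaces are assumptions for illustration, not any team's actual code):

def cascade_st(audio_stream, vad, asr, policy, mt):
    # Schematic cascade simultaneous translation loop.
    translation = []
    for segment in vad.segments(audio_stream):          # audio segmentation
        for transcript in asr.stream(segment):          # growing ASR prefix
            while policy.should_write(transcript, translation):
                translation.append(mt.next_token(transcript, translation))
    return translation

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" },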
{ "text": "Table 5 lists the highest translation quality achieved by BIT-Xiaomi and Huawei, the two best-performing teams on the two tracks. Compared to their performance on the text-to-text track, their speech-to-text systems both show a BLEU degradation of over 9 points. This quality gap is brought about by the different input modalities. The speech-to-text systems receive audio as input, so they need an ASR model to transcribe the audio, or an end-to-end speech translation model to generate the translation directly from speech. The pipeline systems suffer from error propagation, and the performance of the end-to-end systems is limited by data scarcity.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 287, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "This also hints that the processing of speech may be the most significant factor affecting simultaneous translation in real-world scenarios. Some work has attempted to improve pipeline systems by introducing an ASR error correction model (Leng et al., 2021; Zhang et al., 2021a) , while other work has proposed pre-training approaches to alleviate the data scarcity problem of speech translation corpora in end-to-end systems (Pino et al., 2020; Zheng et al., 2021; Li et al., 2020b) . We hope to see more participants in future workshops investigating how to close the performance gap between the two tracks.", "cite_spans": [ { "start": 261, "end": 280, "text": "(Leng et al., 2021;", "ref_id": "BIBREF14" }, { "start": 281, "end": 301, "text": "Zhang et al., 2021a)", "ref_id": "BIBREF42" }, { "start": 435, "end": 453, "text": "Pino et al., 2020;", "ref_id": "BIBREF24" }, { "start": 454, "end": 473, "text": "Zheng et al., 2021;", "ref_id": "BIBREF48" }, { "start": 474, "end": 491, "text": "Li et al., 2020b;", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "The Speech-to-text track", "sec_num": "3.1.2" }, { "text": "The En\u2192Es track received submissions from 7 teams. The latency-quality trade-off results of the En-Es track are plotted in Figure 2 and the ranking is listed in Table 6 . According to the submitted system descriptions, almost all teams used the same training policies in this track as in the Zh\u2192En text-to-text track.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 161, "end": 168, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "English-Spanish Simultaneous Translation", "sec_num": "3.2" }, { "text": "We first carry out experiments to compare different translation quality evaluation metrics (Section 4.1), and then discuss a ranking dilemma of the I-MOS algorithm (Section 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Table 7: Sentence-level agreement with human ratings on 6 ST systems. Given 6 source documents, each system (SYSi) performs ST, and the translation results are evaluated by sentence-level BLEU (sentBLEU), BERTScore, and BLEURT with 4 references. We calculate the Pearson correlation (r), the Spearman correlation (\u03c1), and the Kendall Tau (\u03c4) score between the automatic metrics and human ratings. 
BLEURT shows clear advantages over the other two metrics on all 6 systems.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Recently, many quality estimation metrics have been proposed to better imitate human evaluation (Specia et al., 2021) , such as RUSE (Shimanaka et al., 2018) , YiSi (Mathur et al., 2019) , BERTScore, and BLEURT (Sellam et al., 2020) . These metrics have been proven superior to traditional quality evaluation metrics like BLEU (Papineni et al., 2002) in text translation. However, to the best of our knowledge, no work has conducted such experiments in the ST scenario, and almost all ST work still takes BLEU as the criterion for translation quality evaluation.", "cite_spans": [ { "start": 96, "end": 117, "text": "(Specia et al., 2021)", "ref_id": null }, { "start": 133, "end": 157, "text": "(Shimanaka et al., 2018)", "ref_id": "BIBREF28" }, { "start": 165, "end": 186, "text": "(Mathur et al., 2019)", "ref_id": "BIBREF20" }, { "start": 208, "end": 229, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF26" }, { "start": 329, "end": 352, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "BLEU, BERTScore, and BLEURT", "sec_num": "4.1" }, { "text": "To keep consistent with previous work, we still used document-level BLEU 7 for evaluation in the shared task this year. Here we conduct experiments to compare it with sentence-level BLEU, BERTScore 8 , and BLEURT 9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLEU, BERTScore, and BLEURT", "sec_num": "4.1" }, { "text": "To evaluate the SOTA quality estimation metrics, we ask human annotators to assess the results of multiple ST systems and calculate the agreement between the automatic metrics and the human ratings. Each sentence is rated 1, 2, or 3: 1 denotes that the translation is inconsistent with the original text or incomprehensible; 2 denotes that the translation conveys the main idea of the original text but with minor mistakes in grammar or word usage; 3 denotes that the translation is fully consistent with the original text. To ensure a uniform rating standard, all evaluated sentences are scored by one annotator first, and then checked by another annotator. Both annotators are translators who majored in Chinese-English translation. We randomly select 6 documents (including 975 source sentences in total) from the test set of the first track for evaluation, then select 6 ST systems with high BLEU scores on this test set (SYS1: 30.23, SYS2: 30.35, SYS3: 29.38, SYS4: 33.45, SYS5: 42.05, SYS6: 41.27) and have them manually rated. Given the simultaneous translation results produced by the 6 systems, we calculate the Pearson correlation (r), the Spearman correlation (\u03c1), and the Kendall Tau (\u03c4) scores between the human ratings and the scores of the different automatic metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement between automatic metrics and human ratings", "sec_num": "4.1.1" },
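{ "text": "These agreement statistics can be computed with standard tooling; a minimal sketch (assuming SciPy, with toy inputs for illustration):

from scipy.stats import pearsonr, spearmanr, kendalltau

def metric_agreement(metric_scores, human_ratings):
    # Sentence-level agreement between an automatic metric and human ratings.
    r, _ = pearsonr(metric_scores, human_ratings)
    rho, _ = spearmanr(metric_scores, human_ratings)
    tau, _ = kendalltau(metric_scores, human_ratings)
    return r, rho, tau

# Hypothetical metric scores for five sentences rated 1-3 by annotators.
print(metric_agreement([0.42, 0.66, 0.31, 0.80, 0.55], [2, 3, 1, 3, 2]))

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreement between automatic metrics and human ratings", "sec_num": "4.1.1" },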
{ "text": "As shown in Table 7 , BLEURT has a higher correlation with human ratings than the other two metrics on all 6 systems.", "cite_spans": [], "ref_spans": [ { "start": 1289, "end": 1296, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Agreement between automatic metrics and human ratings", "sec_num": "4.1.1" }, { "text": "Next, we explore these metrics from the perspective of ranking. Taking the average score over all evaluated sentences as the ranking basis, we ask whether each metric yields a ranking consistent with the human evaluations. We first count the proportion of sentences with a human rating of 2 or 3 as the acceptability of each system. Figure 3 shows the rank (horizontal axis) of the six systems in terms of acceptability, from low to high: SYS1 < SYS2 < SYS3 < SYS4 < SYS5 < SYS6.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "7 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/mteval-v13a.pl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "Table 8: Document-level agreement with human ratings. Metrics: r(\u2191) / \u03c1(\u2191) / \u03c4(\u2191). DocBLEU: 0.917 / 0.771 / 0.600; SentBLEU: 0.970 / 0.886 / 0.733; BERTScore: 0.968 / 0.886 / 0.733; BLEURT: 0.994 / 1.000 / 1.000.", "cite_spans": [], "ref_spans": [ { "start": 656, "end": 663, "text": "Table 8", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "Comparing the human-rated acceptability scores and the quality estimated by the automatic metrics, we find that document-level BLEU (DocBLEU) and sentence-level BLEU (sentBLEU) score SYS3 below SYS2, BERTScore rates SYS2 below SYS1, and all three of these metrics rank SYS6 below SYS5. The rankings produced by all three metrics thus differ from those given by the human-rated acceptability. On the contrary, BLEURT's ranking of the 6 systems is consistent with the human results, indicating its higher accuracy in imitating human judgment. Note that BERTScore rates all systems around 0.98, with no significant differences. This might be caused by the collapse problem (Chen and He, 2021) , meaning that BERT-derived representations are somehow collapsed, so that almost all sentences are mapped to similar representations and thus produce high similarity scores.", "cite_spans": [ { "start": 742, "end": 761, "text": "(Chen and He, 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "Table 9: The correlation between human ratings and BLEURT scores, before and after fine-tuning.", "cite_spans": [], "ref_spans": [ { "start": 926, "end": 933, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "The document-level agreement in Table 8 shows the same trend across systems, demonstrating the superiority of BLEURT over the other three metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using different metrics for ranking", "sec_num": "4.1.2" }, { "text": "We further attempt to improve the performance of BLEURT by fine-tuning it on some of the human ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BLEURT on human annotations", "sec_num": "4.1.3" },
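{ "text": "As background for the fine-tuning experiments, scoring candidates against references with a BLEURT checkpoint looks roughly as follows (a sketch assuming the open-source google-research/bleurt package; the checkpoint path is illustrative):

from bleurt import score

scorer = score.BleurtScorer(\"checkpoints/bleurt-base-128\")  # illustrative path
scores = scorer.score(
    references=[\"they arrived at the airport at noon\"],
    candidates=[\"they reached the airport at twelve\"])
print(scores)  # one float per candidate; higher is better

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BLEURT on human annotations", "sec_num": "4.1.3" },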
{ "text": "We first construct a quality estimation training set consisting of 975 \u00d7 3 \u00d7 4 = 11,700 triples, built by pairing the ST results (hypo) and human ratings (score) of three systems (SYS1, SYS2, and SYS4) with the corresponding 4 references (ref). Then we fine-tune BLEURT on this training set and evaluate its performance on the remaining three systems. Here we use BLEURT-Base 10 for faster training. The improvement brought by fine-tuning is shown in Table 9 . After fine-tuning, the correlation for almost all systems improves significantly, especially for SYS5 and SYS6.", "cite_spans": [], "ref_spans": [ { "start": 559, "end": 566, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Fine-tuning BLEURT on human annotations", "sec_num": "4.1.3" }, { "text": "In the shared task, we take the I-MOS algorithm for ranking. It iteratively builds a monotonic optimal sequence (MOS) and takes the proportion of optimal points as the ranking basis. On the quality-latency figure, the MOS is a sequence of optimal points with increasing translation quality and latency, and a point is considered optimal if there is no other point or line above it at the same latency. Although I-MOS is adaptive to uncertain submission results, it has one drawback: the MOS curve is bound to select the leftmost point regardless of its translation quality, because the leftmost point is by definition an optimal point. Therefore, I-MOS somewhat encourages participants to submit only one point with extremely low latency, which makes that team rank first under the I-MOS algorithm; the leftmost point of Figure 2 is such a case.", "cite_spans": [], "ref_spans": [ { "start": 832, "end": 840, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "Figure 4: An example to illustrate the ranking dilemma of the I-MOS ranking algorithm. The vanilla I-MOS algorithm calculates MOS-1 as the yellow dotted curve (V1). According to V1, Team2 would rank higher than Team1, although its left two points are unconvincing because of their extremely low quality. After applying our proposed remedy, the left two points of Team2 are removed and Team1 ranks higher based on the modified MOS-1 (V2).", "cite_spans": [], "ref_spans": [ { "start": 857, "end": 865, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "To eliminate this defect of I-MOS, we propose to add two strategies to future shared tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "1. We require each team to submit at least two points with different delays to make a latency-quality trade-off.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "2. Before running the I-MOS algorithm, we first scan for and remove the leftmost points whose quality is worse than all other submissions. If all submission points of a team are removed, the team is ranked last.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "See Figure 4 for an example. The vanilla I-MOS algorithm would generate the dashed curve as MOS-1 (V1), causing Team2 to rank higher (Team1 scores 3/4, Team2 scores 3/3), although its left two points are unconvincing due to their extremely low quality. After applying this strategy, we remove the two points of Team2, because no other team has points with inferior quality compared to them. Team2 is then scored 1/3 (a sketch of this pre-filtering follows below).", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" },
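{ "text": "A sketch of this pre-filtering step (our own illustrative code; points_by_team maps each team to its (latency, quality) points sorted by latency):

def prune_unconvincing_points(points_by_team):
    # Remedy for the I-MOS dilemma: repeatedly drop a team's leftmost
    # (lowest-latency) point if its quality is worse than every point
    # submitted by all other teams. A team with no points left ranks last.
    pruned = {team: list(pts) for team, pts in points_by_team.items()}
    for team, pts in pruned.items():
        others = [q for t, ps in pruned.items() if t != team for _, q in ps]
        while pts and others and pts[0][1] < min(others):
            pts.pop(0)
    return pruned

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" },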
{ "text": "This strategy does not introduce unfairness toward teams that genuinely target low-latency ST: if Team2 does not deliberately exploit the defect of I-MOS, it should also submit results at higher latency, at the very least its full-sentence translation result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ranking dilemma", "sec_num": "4.2" }, { "text": "This paper presents the results of the simultaneous translation shared task we hosted at the 3rd Workshop on Automatic Simultaneous Translation. The shared task includes three tracks: two text-to-text tracks in different language pairs and one speech-to-text track. We analyze the submissions from 14 participating teams and draw the following lessons for future ST work:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "1. Robust ST models: The results of the first two tracks reveal a large gap between using speech input and using its corresponding gold transcriptions. It is therefore important to explore robust speech translation systems for real-world ST scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "2. End-to-end ST: In the speech-to-text track, we received an end-to-end ST submission for the first time in three years. It integrates a read-write policy into an end-to-end speech translation model and outperforms all the cascaded systems, demonstrating the potential of end-to-end simultaneous translation models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "3. Quality evaluation: Although recently proposed neural network-based metrics have been proven superior to BLEU for standard text translation, ST work still routinely takes BLEU for quality estimation. We compare multiple metrics in the ST scenario and verify that BLEURT is more suitable than BLEU for ST in terms of correlation with human ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "We propose the I-MOS algorithm, as well as its revised version, for system ranking. Considering both quality and latency is crucial for a practical ST system. However, joint quality-latency metrics for ST systems are rarely studied, and we suggest further work on this topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation:", "sec_num": "4." }, { "text": "In future shared tasks, we will make the following changes: 1. Submission: Add a requirement that each submission contain at least two points with different delays to make a latency-quality trade-off.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation:", "sec_num": "4." }, { "text": "2. Criterion: Use BLEURT in place of BLEU for its better correlation with human ratings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation:", "sec_num": "4." }, { "text": "3. Ranking: Remove the leftmost points whose quality is worse than all other submissions before running the I-MOS algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Evaluation:", "sec_num": "4." }, { "text": "This outlier is caused by a missing translation (one sentence generates an empty translation). 
Unlike BERTScore, SentBLEU and BLEURT are less influenced by it: BERTScore values are all relatively high (always above 0.9), so the single zero brought by the empty translation largely degrades BERTScore's Pearson correlation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "authors": [ { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Sundaram", "middle": [], "last": "Ananthanarayanan", "suffix": "" }, { "first": "Rishita", "middle": [], "last": "Anubhai", "suffix": "" }, { "first": "Jingliang", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Case", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Guoliang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "In International conference on machine learning", "volume": "", "issue": "", "pages": "173--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning, pages 173-182. PMLR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Monotonic infinite lookback attention for simultaneous machine translation", "authors": [ { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1313--1323", "other_ids": { "DOI": [ "10.18653/v1/P19-1126" ] }, "num": null, "urls": [], "raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313-1323, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pre-training on high-resource speech recognition improves lowresource speech-to-text translation", "authors": [ { "first": "Sameer", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Pre-training on high-resource speech recognition improves low- resource speech-to-text translation.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline", "authors": [ { "first": "Hui", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Jiayu", "middle": [], "last": "Du", "suffix": "" }, { "first": "Xingyu", "middle": [], "last": "Na", "suffix": "" }, { "first": "Bengu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2017, "venue": "the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. 2017. Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline. In 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), pages 1-5. IEEE.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exploring simple siamese representation learning", "authors": [ { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "15750--15758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinlei Chen and Kaiming He. 2021. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Can neural machine translation do simultaneous translation?", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Masha", "middle": [], "last": "Esipova", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.02012" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho and Masha Esipova. 2016. Can neu- ral machine translation do simultaneous translation? 
arXiv preprint arXiv:1606.02012.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Incremental decoding and training methods for simultaneous translation in neural machine translation", "authors": [ { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "493--499", "other_ids": { "DOI": [ "10.18653/v1/N18-2079" ] }, "num": null, "urls": [], "raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, and Stephan Vogel. 2018. Incremental decoding and training methods for simultaneous translation in neural ma- chine translation. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 493-499, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Enhancing transformer for end-to-end speech-to-text translation", "authors": [ { "first": "Mattia Antonino Di", "middle": [], "last": "Gangi", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Roldano", "middle": [], "last": "Cattoni", "suffix": "" }, { "first": "Dessi", "middle": [], "last": "Roberto", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2019, "venue": "Machine Translation Summit XVII", "volume": "", "issue": "", "pages": "21--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mattia Antonino Di Gangi, Matteo Negri, Roldano Cat- toni, Dessi Roberto, and Marco Turchi. 2019. En- hancing transformer for end-to-end speech-to-text translation. In Machine Translation Summit XVII, pages 21-31. European Association for Machine Translation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Understanding back-translation at scale", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.09381" ] }, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient wait-k models for simultaneous machine translation", "authors": [ { "first": "Maha", "middle": [], "last": "Elbayad", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Verbeek", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2019. Efficient wait-k models for simultaneous ma- chine translation. 
In Interspeech.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Faustino", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "369--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369-376.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to translate in real-time with neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1053--1062", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor OK Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Revisiting self-training for neural sequence generation", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.13788" ] }, "num": null, "urls": [], "raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. 
arXiv preprint arXiv:1909.13788.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Leveraging weakly supervised data to improve end-to-end speech-to-text translation", "authors": [ { "first": "Ye", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Ron", "middle": [ "J" ], "last": "Weiss", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Ari", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Laurenzo", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7180--7184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. 2019. Leverag- ing weakly supervised data to improve end-to-end speech-to-text translation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7180-7184. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Fastcorrect 2: Fast error correction on multiple candidates for automatic speech recognition", "authors": [ { "first": "Yichong", "middle": [], "last": "Leng", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Linchen", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Linquan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Xiang-Yang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.14420" ] }, "num": null, "urls": [], "raw_text": "Yichong Leng, Xu Tan, Rui Wang, Linchen Zhu, Jin Xu, Wenjie Liu, Linquan Liu, Tao Qin, Xiang-Yang Li, Edward Lin, et al. 2021. Fastcorrect 2: Fast error cor- rection on multiple candidates for automatic speech recognition. arXiv preprint arXiv:2109.14420.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards fast and accurate streaming end-toend asr", "authors": [ { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Shuo-Yiin Chang", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Sainath", "suffix": "" }, { "first": "Yanzhang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "He", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Strohman", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6069--6073", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Li, Shuo-yiin Chang, Tara N Sainath, Ruoming Pang, Yanzhang He, Trevor Strohman, and Yonghui Wu. 
2020a. Towards fast and accurate streaming end-to-end ASR. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6069-6073. IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multilingual speech translation with efficient finetuning of pretrained models", "authors": [ { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Chau", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.12829" ] }, "num": null, "urls": [], "raw_text": "Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2020b. Multilingual speech translation with efficient finetuning of pretrained models. arXiv preprint arXiv:2010.12829.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "System description on automatic simultaneous translation workshop", "authors": [ { "first": "Zecheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Haoze", "middle": [], "last": "Li", "suffix": "" } ], "year": 2022, "venue": "The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zecheng Li, Yue Sun, and Haoze Li. 2022. System description on automatic simultaneous translation workshop. In The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "BIT-xiaomi's system for autosimtrans 2022", "authors": [ { "first": "Mengge", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yanzhi", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Tianwei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Silin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuhang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2022, "venue": "The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengge Liu, Xiang Li, Bao Chen, Yanzhi Tian, Tianwei Lan, Silin Li, Yuhang Guo, Jian Luan, and Bin Wang. 2022. BIT-xiaomi's system for autosimtrans 2022. 
In The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "STACL: simultaneous translation with integrated anticipation and controllable latency", "authors": [ { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Kaibo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hairong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2019. STACL: simultaneous translation with integrated anticipation and control- lable latency. In ACL 2019, volume abs/1810.08398.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Putting evaluation in context: Contextual embeddings improve machine translation evaluation", "authors": [ { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2799--2808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual em- beddings improve machine translation evaluation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2799- 2808.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Streaming automatic speech recognition with the transformer model", "authors": [ { "first": "Niko", "middle": [], "last": "Moritz", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Hori", "suffix": "" }, { "first": "Jonathan", "middle": [ "Le" ], "last": "", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6074--6078", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Moritz, Takaaki Hori, and Jonathan Le. 2020. Streaming automatic speech recognition with the transformer model. In ICASSP 2020-2020 IEEE In- ternational Conference on Acoustics, Speech and Sig- nal Processing (ICASSP), pages 6074-6078. 
IEEE.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Optimizing segmentation strategies for simultaneous speech translation", "authors": [ { "first": "Yusuke", "middle": [], "last": "Oda", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Sakriani", "middle": [], "last": "Sakti", "suffix": "" }, { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Nakamura", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "551--556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 551-556.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Self-training for end-to-end speech translation", "authors": [ { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Qiantong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xutai", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Mohammad", "middle": [ "Javad" ], "last": "Dousti", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.02490" ] }, "num": null, "urls": [], "raw_text": "Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for end-to-end speech translation. arXiv preprint arXiv:2006.02490.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An algorithm for determining the endpoints of isolated utterances", "authors": [ { "first": "Lawrence", "middle": [ "R" ], "last": "Rabiner", "suffix": "" }, { "first": "Marvin", "middle": [ "R" ], "last": "Sambur", "suffix": "" } ], "year": 1975, "venue": "Bell System Technical Journal", "volume": "54", "issue": "2", "pages": "297--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence R Rabiner and Marvin R Sambur. 1975. An algorithm for determining the endpoints of isolated utterances.
Bell System Technical Journal, 54(2):297-315.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bleurt: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur P", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.04696" ] }, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. Bleurt: Learning robust metrics for text generation. arXiv preprint arXiv:2004.04696.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06709" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ruse: Regressor using sentence embeddings for automatic machine translation evaluation", "authors": [ { "first": "Hiroki", "middle": [], "last": "Shimanaka", "suffix": "" }, { "first": "Tomoyuki", "middle": [], "last": "Kajiwara", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "751--758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. Ruse: Regressor using sentence embeddings for automatic machine translation evaluation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 751-758.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Findings of the WMT 2021 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Chrysoula", "middle": [], "last": "Zerva", "suffix": "" }, { "first": "Zhenhao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and Andr\u00e9 Martins. 2021. Findings of the WMT 2021 shared task on quality estimation. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Silero vad: pre-trained enterprise-grade voice activity detector (vad), number detector and language classifier", "authors": [], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silero Team. 2021.
Silero vad: pre-trained enterprise- grade voice activity detector (vad), number detector and language classifier.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Low latency end-to-end streaming speech recognition with a scout network", "authors": [ { "first": "Chengyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jinyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Guoli", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10369" ] }, "num": null, "urls": [], "raw_text": "Chengyi Wang, Yu Wu, Shujie Liu, Jinyu Li, Liang Lu, Guoli Ye, and Ming Zhou. 2020a. Low latency end-to-end streaming speech recognition with a scout network. arXiv preprint arXiv:2003.10369.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Bridging the gap between pretraining and fine-tuning for end-to-end speech translation", "authors": [ { "first": "Chengyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhenglu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "9161--9168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020b. Bridging the gap between pre- training and fine-tuning for end-to-end speech trans- lation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9161-9168.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Thchs-30: A free chinese speech corpus", "authors": [ { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xuewei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.01882" ] }, "num": null, "urls": [], "raw_text": "Dong Wang and Xuewei Zhang. 2015. 
Thchs-30: A free chinese speech corpus. arXiv preprint arXiv:1512.01882.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural simultaneous speech translation using alignment-based chunking", "authors": [ { "first": "Patrick", "middle": [], "last": "Wilken", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Alkhouli", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Golik", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 17th International Conference on Spoken Language Translation", "volume": "", "issue": "", "pages": "237--246", "other_ids": { "DOI": [ "10.18653/v1/2020.iwslt-1.29" ] }, "num": null, "urls": [], "raw_text": "Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, and Pavel Golik. 2020. Neural simultaneous speech trans- lation using alignment-based chunking. In Proceed- ings of the 17th International Conference on Spoken Language Translation, pages 237-246, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Proceedings of the First Workshop on Automatic Simultaneous Translation", "authors": [ { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Collin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "James", "middle": [], "last": "Cross", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua Wu, Collin Cherry, Liang Huang, Zhongjun He, Mark Liberman, James Cross, and Yang Liu, edi- tors. 2020. Proceedings of the First Workshop on Automatic Simultaneous Translation. Association for Computational Linguistics, Seattle, Washington.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "R-drop: regularized dropout for neural networks", "authors": [ { "first": "Lijun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Juntao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-drop: regularized dropout for neural networks. 
Advances in Neural Information Processing Systems, 34.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Consert: A contrastive framework for self-supervised sentence representation transfer", "authors": [ { "first": "Yuanmeng", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Rumei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sirui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Fuzheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Weiran", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2105.11741" ] }, "num": null, "urls": [], "raw_text": "Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. Consert: A con- trastive framework for self-supervised sentence repre- sentation transfer. arXiv preprint arXiv:2105.11741.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Realtrans: End-to-end simultaneous speech translation with convolutional weighted-shrinking transformer", "authors": [ { "first": "Xingshan", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.04833" ] }, "num": null, "urls": [], "raw_text": "Xingshan Zeng, Liangyou Li, and Qun Liu. 2021. Re- altrans: End-to-end simultaneous speech translation with convolutional weighted-shrinking transformer. arXiv preprint arXiv:2106.04833.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "End-to-end simultaneous speech translation with pretraining and distillation: Huawei noah's system for autosimtrans 2022", "authors": [ { "first": "Xingshan", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Liangyou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2022, "venue": "The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xingshan Zeng, Pengfei Li, Liangyou Li, and Qun Liu. 2022. End-to-end simultaneous speech translation with pretraining and distillation: Huawei noah's sys- tem for autosimtrans 2022. In The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Learning adaptive segmentation policy for end-to-end simultaneous translation", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2022, "venue": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "7862--7874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2022. Learning adaptive segmentation policy for end-to-end simultaneous translation. In Proceed- ings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 7862-7874, Dublin, Ireland. 
Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Correcting Chinese spelling errors with phonetic pre-training", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "2250--2261", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.198" ] }, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Chao Pang, Chuanqiang Zhang, Shuo- huan Wang, Zhongjun He, Yu Sun, Hua Wu, and Haifeng Wang. 2021a. Correcting Chinese spelling errors with phonetic pre-training. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 2250-2261, Online. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "BSTC: A large-scale Chinese-English speech translation dataset", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qinfei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Second Workshop on Automatic Simultaneous Translation", "volume": "", "issue": "", "pages": "28--35", "other_ids": { "DOI": [ "10.18653/v1/2021.autosimtrans-1.5" ] }, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, Haifeng Wang, Ying Chen, and Qinfei Li. 2021b. BSTC: A large-scale Chinese-English speech translation dataset. In Pro- ceedings of the Second Workshop on Automatic Si- multaneous Translation, pages 28-35, Online. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning adaptive segmentation policy for simultaneous translation", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2280--2289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2020. Learning adaptive segmentation policy for simultaneous translation. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2280-2289.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Findings of the second workshop on automatic simultaneous translation", "authors": [ { "first": "Ruiqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuanqiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Second Workshop on Automatic Simultaneous Translation", "volume": "", "issue": "", "pages": "36--44", "other_ids": { "DOI": [ "10.18653/v1/2021.autosimtrans-1.6" ] }, "num": null, "urls": [], "raw_text": "Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2021c. Findings of the sec- ond workshop on automatic simultaneous translation. In Proceedings of the Second Workshop on Automatic Simultaneous Translation, pages 36-44, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09675" ] }, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "System description on third automatic simultaneous translation workshop", "authors": [ { "first": "Yiqiao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2022, "venue": "The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiqiao Zhang. 2022. System description on third auto- matic simultaneous translation workshop. In The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation", "authors": [ { "first": "Renjie", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Junkun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mingbo", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2021, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "12736--12746", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renjie Zheng, Junkun Chen, Mingbo Ma, and Liang Huang. 2021. Fused acoustic and text encoding for multimodal bilingual pretraining and speech transla- tion. In International Conference on Machine Learn- ing, pages 12736-12746. 
PMLR.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "USST's system for autosimtrans 2022", "authors": [ { "first": "Jiahui", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2022, "venue": "The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "JiaHui Zhu and Jun Yu. 2022. USST's system for autosimtrans 2022. In The 3rd Workshop on Automatic Simultaneous Translation at NAACL 2022.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "(a) Zh-En Text-to-text ST track; (b) Zh-En Speech-to-text ST track", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Figure 1: The evaluation results of the first two tracks. The order in the legend (line by line) denotes the ranking result, which is calculated by the I-MOS algorithm. It iteratively builds the monotonic optimal sequence (MOS) of level k (MOS-k) and classifies teams that have points on it to the k-th level. We use points of the same color but different shapes to represent the results of teams belonging to the same level, and the teams are ranked according to the proportion of points on the corresponding MOS.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "The evaluation results of the En-Es text-to-text ST track.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Human-rated acceptability vs. automatic metrics for the translation of 6 systems. [Footnote residue: BERTScore from https://github.com/Tiiiger/bert_score, based on roberta-large; BLEURT from https://github.com/google-research/bleurt]", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "[Footnote 10: https://storage.googleapis.com/bleurt-oss/bleurt-base-128.zip]", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "text": "", "content": "
Team | Organization
BIT-Xiaomi | Beijing Institute of Technology & Xiaomi Inc., Beijing, China
Huawei | Huawei Noah's Ark Lab, Guangdong, China
HAU | Huazhong Agricultural University, Hubei, China
USST-HZLHZ | Anonymous
ZXN | Zhejiang Univ. & Xiamen Univ. & North China Institute of Aerospace Engineering
TMU | Tianjin Medical University, Tianjin, China
CITC | Changchun Information Technology College, Jilin, China
NCIAE | North China Institute of Aerospace Engineering, Hebei, China
XJTU | Xi'an Jiaotong University, Shaanxi, China
HIT | Harbin Institute of Technology, Heilongjiang, China
ZJU | Zhejiang University, Zhejiang, China
Nuctech | Nuctech Company, Beijing, China
A23 | Anonymous
ECUST | Univ. of Shanghai for Science and Technology & East China Univ. of Science and Technology
", "num": null, "html": null }, "TABREF1": { "type_str": "table", "text": "", "content": "", "num": null, "html": null }, "TABREF4": { "type_str": "table", "text": "The ranking of the Zh\u2192En text-to-text ST track. The scores are calculated according to the I-MOS algorithm.", "content": "
", "num": null, "html": null }, "TABREF6": { "type_str": "table", "text": "", "content": "
", "num": null, "html": null }, "TABREF8": { "type_str": "table", "text": "", "content": "
", "num": null, "html": null }, "TABREF10": { "type_str": "table", "text": "", "content": "
further lists the correlation between the automatic metrics and human acceptability for the 6 systems.
", "num": null, "html": null } } } }