{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:35.136894Z"
},
"title": "Findings of the Second Workshop on Automatic Simultaneous Translation",
"authors": [
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc. No",
"location": {
"addrLine": "10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "zhangruiqing01@baidu.com"
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc. No",
"location": {
"addrLine": "10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "zhangchuanqiang@baidu.com"
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc. No",
"location": {
"addrLine": "10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "hezhongjun@baidu.com"
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc. No",
"location": {
"addrLine": "10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "wu_hua@baidu.com"
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc. No",
"location": {
"addrLine": "10, Shangdi 10th Street",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the results of the shared task of the 2nd Workshop on Automatic Simultaneous Translation (AutoSimTrans). The task includes two tracks, one for text-to-text translation and one for speech-to-text, requiring participants to build systems to translate from either the source text or speech into the target text. Different from traditional machine translation, the AutoSimTrans shared task evaluates not only translation quality but also latency. We propose a metric \"Monotonic Optimal Sequence\" (MOS) considering both quality and latency to rank the submissions. We also discuss some important open issues in simultaneous translation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the results of the shared task of the 2nd Workshop on Automatic Simultaneous Translation (AutoSimTrans). The task includes two tracks, one for text-to-text translation and one for speech-to-text, requiring participants to build systems to translate from either the source text or speech into the target text. Different from traditional machine translation, the AutoSimTrans shared task evaluates not only translation quality but also latency. We propose a metric \"Monotonic Optimal Sequence\" (MOS) considering both quality and latency to rank the submissions. We also discuss some important open issues in simultaneous translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Simultaneous translation is to translate concurrently with the speech in the source language, aiming to obtain high translation quality with low latency. The concurrent comprehension and production process makes simultaneous translation an extremely challenging task for both human experts and machines. As a combination of machine translation (MT), automatic speech recognition (ASR), and text-to-speech synthesis (TTS), simultaneous translation still facing many problems to be studied in the research and application. To promote the development in this cutting-edge field, we conduct a shared task at the 2nd Workshop on Automatic Simultaneous Translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This year, we focus on Chinese-English simultaneous translation and set up two tracks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Text-to-text track, where the participants are asked to submit systems that translate streaming input text in real-time. The input of this track is human-annotated transcripts in streaming format, in which every n-word sentence is broken into n lines of sequences whose length ranges from 1 to n, incremented by 1. We set up this track for two reasons. On the one hand, the difficulty of the task is reduced by removing the recognition of speech. On the other hand, participants can focus on text processing, such as segmentation and translation, without being influenced by ASR errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Speech-to-text track, where the submitted systems need to produce a real-time translation of the given audio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
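{
"text": "As a concrete illustration of the streaming format of Track 1, the following minimal Python sketch (ours, for exposition only; it is not part of any submitted system) turns an n-word sentence into the n incremental lines described above:\n\ndef streaming_prefixes(words):\n    # Produce n lines of prefixes of length 1, 2, ..., n, as in the\n    # text-to-text track input. For Chinese input, each 'word' is a character.\n    return [' '.join(words[:i]) for i in range(1, len(words) + 1)]\n\n# streaming_prefixes(['we', 'must', 'act']) -> ['we', 'we must', 'we must act']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},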
{
"text": "We provide BSTC (Zhang et al., 2021 ) (Baidu Speech Translation Corpus) as the training data, which consists of about 68 hours of Mandarin speeches, together with corresponding transcripts, ASR results, and translations. In addition, participants can also use bilingual corpus provided by CCMT (China Conference on Machine Translation) 1 . We will describe the data in detail in Section 2.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Zhang et al., 2021",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One objective of the shared task is to explore the performance of state-of-the-art simultaneous translation systems. Traditional evaluation metrics, such as BLEU, only measure the translation quality, while recently proposed metrics, such as Consecutive Wait (CW) (Gu et al., 2017) and Average Lagging (AL) (Ma et al., 2019) focus on latency. So far as we know, there is no metric that evaluates both quality and delay.",
"cite_spans": [
{
"start": 264,
"end": 281,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 307,
"end": 324,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We ask the participants to submit systems under different configurations to produce multiple translation results with varying latency. Then we plot each result in a quality-latency coordinate. Normally, a system is regarded as the best if all of its points are above others (Figure 1(a) ). However, in most cases, their lines of points intersect with each other (Figure 1(b) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 286,
"text": "(Figure 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 362,
"end": 374,
"text": "(Figure 1(b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To consider both quality and latency in ranking, we propose a ranking metric, Monotonic Optimal Sequence (MOS) (Section 3). The idea is to first find all the optimal points, that is, a group of points with the highest quality under different latency, and then calculate the proportion of a system's optimal points in all its submitted points. The higher the proportion, the better the performance. We received six submissions from four teams this year. We will report the results and analysis in Section 4. We discuss some important open issues in Section 5 and conclude the paper in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first introduce the data sets used in the shared task and the setup of the two tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task",
"sec_num": "2"
},
{
"text": "Due to the scarcity of Zh\u2192En speech translation corpora, we provide a Zh\u2192En speech translation dataset BSTC and a large-scale text translation corpus CCMT for the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set",
"sec_num": "2.1"
},
{
"text": "\u2022 BSTC (Zhang et al., 2021) ",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Zhang et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set",
"sec_num": "2.1"
},
{
"text": "Translation Corpus) is a 68-hour Zh\u2192En speech translation data including 215 speeches for training. Each speech is segmented into sentences, transcribed, and translated into English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Baidu Speech",
"sec_num": null
},
{
"text": "\u2022 We also encourage participants to use the large-scale Zh\u2192En text translation corpus CCMT 2020 (China Conference on Machine Translation) to enhance the performance of machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Baidu Speech",
"sec_num": null
},
{
"text": "The statistics of the two datasets are listed in Table 1 . As far as we know, BSTC is by far the largest Zh\u2192En speech translation corpus, but it is still insufficient to train either a well-performed ASR model or an end-to-end simultaneous translation model in the speech-to-text track. Therefore, we don't impose restrictions on the dataset used by the participants for the speech track.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "(Baidu Speech",
"sec_num": null
},
{
"text": "Notice that the test set of BSTC shown in Table 1 is not released. The participants are required to submit docker systems, which will be tested on the 1.5-hours test set by us. The test set is kept confidential as a progress test set. To validate the system to submit, we provide the dev set to the participants, which has the same format as the test set. It contains four-way parallel samples of 1) the streaming transcript, 2) the streaming asr, 3) the sentence-level translation of the transcript, and 4) the audio. The streaming transcripts are produced by turning each n-word (a word means a Chinese character here) sentence to n lines of word sequences with length 1, 2, ..., n. And the streaming ASR is produced by the real-time Baidu ASR system based on SMLTA 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Test Set",
"sec_num": "2.2"
},
{
"text": "We set two tracks in our shared task, the text-totext track is to input streaming transcripts and the speech-to-text track is to input audio files, as mentioned in section 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Tracks",
"sec_num": "2.3"
},
{
"text": "The simultaneous translation aims to balance system delay and translation quality. The key problem is to explore a policy that decides when to begin translating a source sentence before the speaker has finished his/her utterance. Eager policies, such as translating every word when it is received, will lead to poor translation quality, while lazy policies, such as waiting to translate until receiving a complete sentence, will result in long system delay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Tracks",
"sec_num": "2.3"
},
{
"text": "In order to comprehensively evaluate each system's performance, we suggest that the participants generate multiple results on varying latency. Six systems from four teams were submitted in the shared task, four to Track 1 and two to Track 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Tracks",
"sec_num": "2.3"
},
{
"text": "Unlike text translation evaluation that only takes one indicator (i.e., translation quality), simultaneous translation evaluation needs to consider quality and latency at the same time. The evaluation based on two criteria brings difficulties to ranking the systems. However, the two indicators are not easy to merge into one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Evaluation",
"sec_num": "3"
},
{
"text": "To rank the submissions better, we propose a ranking algorithm called Iterative Monotonic Optimal Sequence (I-MOS). Specifically, we define an optimal point as the result of the best translation quality at each latency. Our algorithm iteratively finds sets of optimal points to construct an optimal curve called Monotonic Optimal Sequence (MOS), then each team's proportion of points on the MOS curve is calculated to measure the performance. The overall process is illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 481,
"end": 489,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "System Evaluation",
"sec_num": "3"
},
{
"text": "In the following sections, we first introduce the commonly used metrics of quality and latency (Section 3.1), then propose the Monotonic Optimal Sequence (Section 3.2) and elaborate our I-MOS algorithm (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Evaluation",
"sec_num": "3"
},
{
"text": "In simultaneous translation, quality is often measured by BLEU (Papineni et al., 2002) . Recent work proposed some metrics for latency evaluation, such as Average Proportion (AP) (Cho and Esipova, 2016) , Consecutive Wait (CW) (Gu et al., 2017) , Average Lagging (AL) (Ma et al., 2019) and Differentiable Average Lagging (DAL) (Arivazhagan et al., 2019 ). Here we briefly introduce the two latency metrics used in our evaluation:",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 179,
"end": 202,
"text": "(Cho and Esipova, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 227,
"end": 244,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 268,
"end": 285,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 327,
"end": 352,
"text": "(Arivazhagan et al., 2019",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.1"
},
{
"text": "\u2022 CW is the average source segment length in words. It measures the number of source words being waited for between each two translation actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.1"
},
{
"text": "\u2022 AL quantifies the degree the audience is out of sync with the speaker by the average number of source words that the audience lags behind the ideal policy, in which the translation of each sentence is output at the same speed as the speaker's utterance and the entire translation finished when the speaker completes his/her utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.1"
},
{
"text": "Note that the above-mentioned latency metrics are all proposed for text-to-text simultaneous translation and we use AL in the text track for latency evaluation. Some work extended AP and AL to speech translation (Ren et al., 2020; Ma et al., 2020 ), but we don't use them because they measure real-time latency, while some submissions calling remote services contain network delay. It is unreasonable to use real-time latency metrics for both the local-running systems and remote-running systems. Thus we ignore the latency of the ASR model and take the metrics of text-to-text simultaneous translation in the speech track. Specifically, we use BLEU-AL evaluation in the Text-to-text track and BLEU-CW evaluation in the Speech-to-text track.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "(Ren et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 231,
"end": 246,
"text": "Ma et al., 2020",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.1"
},
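{
"text": "As a concrete reference, a small sketch of AL following the definition in Ma et al. (2019); the function and variable names are ours, and this is an illustration rather than the official evaluation script. Here g[t-1] denotes the number of source words read before the t-th target word is emitted:\n\ndef average_lagging(g, src_len, tgt_len):\n    # r is the target-to-source length ratio; tau is the first decoding step\n    # at which the full source has been read, and the sum runs over t = 1..tau.\n    r = tgt_len / src_len\n    tau = next(t for t, g_t in enumerate(g, start=1) if g_t == src_len)\n    return sum(g[t - 1] - (t - 1) / r for t in range(1, tau + 1)) / tau\n\n# wait-2 on a 4-word source with a 4-word target: g = [2, 3, 4, 4]\n# average_lagging([2, 3, 4, 4], 4, 4) -> 2.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.1"
},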
{
"text": "To comprehensively rank systems based on the translation quality and latency, we propose to construct a monotonic optimal sequence composed of Optimal Points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
{
"text": "Definition 1. On the quality-latency figure, one result is considered optimal if there is no other point or line above it at an identical latency. In this case, the result is of the highest translation quality at that latency and we define it as an Optimal Point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
{
"text": "For example, among the nine results of Figure 1 (b), the leftmost two points of Team1 and rightmost two points of Team2 are Optimal Points. The third point from left on Team2's curve is not optimal because it lies below the line of Team1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
{
"text": "To get Optimal Points, we select the results of the best translation quality with different latency. Since the submitted systems have discrete latency, we use the linear interpolation of adjacent points of each team to estimate their translation quality on continuous latency. Then we select some Optimal Points to form an optimal curve called Monotonic Optimal Sequence. Definition 2. Let Monotonic Optimal Sequence (MOS) be a sequence of Optimal Point with increasing translation quality and latency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
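{
"text": "To make the optimal-point test concrete, a minimal sketch under the definitions above (illustrative Python; the names are ours). Each team's curve is its set of (latency, BLEU) points with linear interpolation between adjacent points:\n\ndef interpolated_quality(curve, latency):\n    # Piecewise-linear interpolation of one team's (latency, BLEU) points;\n    # returns None outside the team's latency range.\n    pts = sorted(curve)\n    if not pts or latency < pts[0][0] or latency > pts[-1][0]:\n        return None\n    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):\n        if x0 <= latency <= x1:\n            return y1 if x1 == x0 else y0 + (y1 - y0) * (latency - x0) / (x1 - x0)\n    return pts[-1][1]\n\ndef is_optimal(point, curves):\n    # Definition 1: no point or interpolated line strictly above the point\n    # at the same latency.\n    lat, bleu = point\n    return all(q is None or q <= bleu\n               for q in (interpolated_quality(c, lat) for c in curves))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},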
{
"text": "We arrange all the Optimal Points in ascending order of latency and then select the points with monotonously increasing translation quality to form the MOS. The monotonicity requirement for translation quality is to avoid outlier points. For example, the rightmost point of Team1 in Figure 2 (b) is an outlier because there is no point or line above this point at the same latency, but it doesn't follow the monotonicity principle, so it should not be added to MOS.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 292,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
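{
"text": "The monotonicity filter of Definition 2 can then be sketched as follows (again illustrative, continuing the helpers above):\n\ndef monotonic_optimal_sequence(optimal_points):\n    # Sort Optimal Points by latency and keep only those whose BLEU strictly\n    # increases, dropping outliers such as the rightmost point of Team1 in\n    # Figure 2(b).\n    mos, best = [], float('-inf')\n    for lat, bleu in sorted(optimal_points):\n        if bleu > best:\n            mos.append((lat, bleu))\n            best = bleu\n    return mos",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},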
{
"text": "We propose to use each team's proportion of points on the MOS to evaluate its performance. That is, we rank teams with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S T i = N (p * t i )/N (p t i )",
"eq_num": "(1)"
}
],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
{
"text": "where N (p * t i ) and N (p t i ) denote the number of points on MOS and the number of submitted points of team i, respectively. Therefore, the maximum value of S T i is 1, when all of its submitted points are on the MOS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},
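{
"text": "In code, Eq. 1 is simply the following (sketch; the names are ours):\n\ndef team_score(team_points, mos):\n    # Eq. 1: the proportion of a team's submitted points that lie on the MOS.\n    return sum(1 for p in team_points if p in mos) / len(team_points)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonic Optimal Sequence",
"sec_num": "3.2"
},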
{
"text": "There exists a problem in our measurement that, according to Eq. 1, all the teams that have no points on the MOS are ranked tied because they all score zero. To tackle this problem, we propose the Iterative Monotonic Optimal Sequence (I-MOS) algorithm. The main idea is to iteratively calculate the MOS curves, MOS-1, MOS-2, ... MOS-K, in which MOS-k denotes the Monotonic Optimal Sequence of level k calculated at the k th iteration. All the systems that have at least one point on MOSk are classified to level k. We remove these systems and calculate MOS-(k + 1) in the next iteration. Each team of the k th level ranks higher than all teams of the (k + 1) th level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Monotonic Optimal Sequence Algorithm",
"sec_num": "3.3"
},
{
"text": "Our algorithm is elaborated in Algorithm 1. The level of all teams is initialized to zero (line 1), which denotes the team's score has not been calculated. Then we begin our iteration. While there exists at least one team whose score has not been calculated (line 4), we update the score of teams that belong to superior levels (level 1, 2, ..., k \u2212 1) teams by adding the maximum value of S T i (1 point) to them (line 5-7) to ensure the systems of level 1, 2, ...k \u2212 1 scores higher than systems of level k. Then we calculate MOS-k (line 8) and update the score of the teams that belong to level k according to Eq. 1 (line 9-11). After an iteration, we continue to explore teams that belong to level k +1 (line 12). Figure 2 provides a running process of I-MOS.",
"cite_spans": [],
"ref_spans": [
{
"start": 718,
"end": 726,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Iterative Monotonic Optimal Sequence Algorithm",
"sec_num": "3.3"
},
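{
"text": "Putting the pieces together, a compact sketch of the I-MOS loop built on the helpers above (our illustration of Algorithm 1, not the official implementation):\n\ndef i_mos(team_points):\n    # team_points maps each team to its list of (latency, BLEU) points.\n    level = {t: 0 for t in team_points}  # 0 means not yet assigned\n    score = {t: 0.0 for t in team_points}\n    k = 1\n    while any(lv == 0 for lv in level.values()):\n        # Teams already placed at levels 1..k-1 receive the maximum score\n        # (1 point) so that they always outrank teams of level k.\n        for t in team_points:\n            if level[t] != 0:\n                score[t] += 1.0\n        remaining = {t: pts for t, pts in team_points.items() if level[t] == 0}\n        optimal = [p for pts in remaining.values() for p in pts\n                   if is_optimal(p, remaining.values())]\n        mos_k = monotonic_optimal_sequence(optimal)\n        # MOS-k always contains the highest-BLEU remaining point, so at least\n        # one team is placed per iteration and the loop terminates.\n        for t, pts in remaining.items():\n            if any(p in mos_k for p in pts):\n                level[t] = k\n                score[t] += team_score(pts, mos_k)\n        k += 1\n    return score  # higher score = higher rank",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Monotonic Optimal Sequence Algorithm",
"sec_num": "3.3"
},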
{
"text": "We received 6 systems submitted by four teams from four universities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "\u2022 Institute of computing technology, Chinese Academy of Science (ICT)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "\u2022 Xiamen University (XMU)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "\u2022 Beijing Institute of Technology (BIT)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "\u2022 Ping An Technology (Shenzhen) Co., Ltd. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "tl[i] \u2190 k 11 s[i] \u2190 N (p * t i )/N (p t i ) 12 k \u2190 k + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "We test each docker system with our testset, which contains 1.5 hours of 6 Mandarin talks. All the systems are run on V100 GPU. We plot the evaluation results in Figure 3 and rank them according to the I-MOS algorithm. Their ranking results are shown in Table 2 . We use BLEU 3 to evaluate the translation quality and use Average Lagging (AL) (Ma et al., 2019) and Consecutive Wait (CW) (Gu et al., 2017) as latency metrics.",
"cite_spans": [
{
"start": 343,
"end": 360,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 387,
"end": 404,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 254,
"end": 261,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Systems Results",
"sec_num": "4"
},
{
"text": "In the first track, the results of the four teams reflect their preference in balancing system latency and translation quality. We briefly describe the methods of the four teams below in the order of their ranks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text-to-text Track",
"sec_num": "4.1"
},
{
"text": "1. ICT proposes the character-level wait-k policy, rather than using the standard word-level wait-k (Ma et al., 2019) . They perform prefixto-prefix MT training as in the original work. Besides, they follow the multi-path (Elbayad et al., 2020) and future-guided (Zhang et al., 2020b) methods to enhance the predictability and avoid huge anticipation in translation 3 BLEU is calculated using \" https://github.com/mosessmt/mosesdecoder/blob/master/scripts/generic/mteval-v13a.pl\". caused by wait-k. The multi-path method adopts randomly sampled k in [1, 2, ..., K] in the training of incremental MT model to cover all possible k during training. And the future-guided method attempts to promote the prediction ability of the wait-k strategy. To improve the robustness of the MT model, they further try several data augmentation methods via adding noise to the source text.",
"cite_spans": [
{
"start": 100,
"end": 117,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 222,
"end": 244,
"text": "(Elbayad et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 263,
"end": 284,
"text": "(Zhang et al., 2020b)",
"ref_id": "BIBREF22"
},
{
"start": 550,
"end": 564,
"text": "[1, 2, ..., K]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text-to-text Track",
"sec_num": "4.1"
},
{
"text": "Team N (p * t i )/N (p t i ) Level 1 ICT 4/4 XMU 2/3 BIT 1/4 Level 2 PingAn 7/7 Track 2 Level 1 PingAn 1/1 XMU 1/3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
{
"text": "2. XMU follows the Meaningful Unit (MU) segmentation policy proposed in Zhang et al. (2020a) that uses a context-aware classification model to determine whether the currently received ASR content can be definitely translated. To generate consistent translation given the segmentation, the MT model of the pipeline system is used to automatically generate training data of meaningful units. The MT model is trained by full-sentences pairs.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "Zhang et al. (2020a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
{
"text": "3. BIT uses a pipeline method with a segmentation model that bridges the streaming text input and the MT model. Once a punctuation mark is detected, the segmentation sends the currently received sub-sentence for translation as in . To make the MT model adapt to translating short subsentences at inference time, each sample in the provided parallel training corpus is automatically divided into multiple translation pairs for training. A statistical word alignment tool is used to segment the source sentence into minimal chunks so that crossing alignment links between source and target words occur only within individual chunks. The parallel pairs of chunks are then used to train their MT model. 4. PingAn takes the test-time wait-k (Ma et al., 2019) as the segmentation policy. Different from the standard wait-k policy, test-time waitk uses the wait-k policy only at inference time without prefix-to-prefix training the MT model. They further adopt Back-Translation (Sennrich et al., 2016) to improve the translation quality.",
"cite_spans": [
{
"start": 736,
"end": 753,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 971,
"end": 994,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
{
"text": "In summary, we can categorize the four systems according to their segmentation policy: Both ICT and PingAn adopt the wait-k policy. ICT adopts training-time wait-k while PingAn uses test-time wait-k. BIT chooses sub-sentence translation, that is, to translate only when a punctuation is detected. XMU performs MU-based segmentation in which the training samples of meaningful units are generated by the MT model. Figure 3 (a) shows that the latency of the two methods using wait-k is relatively low, while MUbased policy can achieve high translation quality. For the two wait-k systems, ICT performs better than PingAn, which is consistent with the experimental results in Ma et al. (2019) that training-time wait-k is superior to test-time wait-k.",
"cite_spans": [
{
"start": 673,
"end": 689,
"text": "Ma et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 413,
"end": 425,
"text": "Figure 3 (a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
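{
"text": "To make the contrast concrete, a minimal sketch of the wait-k read/write schedule discussed above (our illustration; translate_step stands for a hypothetical one-token decoder and is not any team's actual code):\n\ndef wait_k_translate(k, source_tokens, translate_step):\n    # Read k source tokens first, then alternate one write per read; after\n    # the source ends, keep writing until the decoder emits '<eos>'.\n    src, out = [], []\n    for token in source_tokens:\n        src.append(token)\n        if len(src) >= k:\n            out.append(translate_step(src, out))\n    while not out or out[-1] != '<eos>':\n        out.append(translate_step(src, out))\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},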
{
"text": "It's interesting to find that the latency of XMU is larger than that of BIT. This might be because there are often long-distance reorderings in the training corpus. The reordering in translation that crosses punctuation marks would prevent the MU segmentation policy from extracting fine-grained MUs, resulting in the average length of MUs exceeding sub-sentences. This problem has been illustrated in Zhang et al. (2020a) and they proposed a refined method called MU++ to alleviate the problem.",
"cite_spans": [
{
"start": 402,
"end": 422,
"text": "Zhang et al. (2020a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
{
"text": "The result of BIT is a little weird. The translation quality decreases as system latency grows. This might be caused by the discrepancy between the segmentation module and the MT model. In their method, the segmentation module segments sentences into sub-sentences while the MT model is trained on statistically split chunks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Track 1 Team Level",
"sec_num": null
},
{
"text": "As elaborated in Section 3.1, we use BLEU and Consecutive Wait (CW) (Gu et al., 2017) to evaluate systems in the speech track.",
"cite_spans": [
{
"start": 68,
"end": 85,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech-to-text Track",
"sec_num": "4.2"
},
{
"text": "PingAn and XMU continue their work based on their systems submitted to the Text-to-text track. The two systems both keep the same policy used in the first track and only replace the text input with the recognition results of an ASR model. PingAn trains a QuartzNet model (Kriman et al., 2020) with the Memory-Self-Attention (Luo et al., 2021) and XMU uses Baidu's real-time speech recognition service.",
"cite_spans": [
{
"start": 324,
"end": 342,
"text": "(Luo et al., 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech-to-text Track",
"sec_num": "4.2"
},
{
"text": "Figure 3 (b) shows that PingAn using wait-k outperforms XMU in latency. The reason behind the large delay of XMU's system might be the same as in the first track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech-to-text Track",
"sec_num": "4.2"
},
{
"text": "Most recent studies on simultaneous translation focused on methods to balance translation quality and latency. Besides this, we will discuss some other important challenges for simultaneous translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The first problem is the shortage of high-quality simultaneous translation data. In recent years, some speech translation corpora have released, such as MuST-C (Di Gangi et al., 2019) , Covost (Wang et al., 2020a,b) , Europarl-ST (Iranzo-S\u00e1nchez et al., 2020) , Aug-LibriSpeech (Kocabiyikoglu et al., 2018) , etc. These corpora focus on Indo-European languages and have greatly contributed to the increasing popularity of research of simultaneous translation. However, there is little attention paid to research and data collection of Chinese-English (Zh\u2192En) simultaneous translation. To the best of our knowledge, only MSLT (Federmann and Lewis, 2016) and Covost (Wang et al., 2020b) contain Zh\u2192En speech translation data, but they totally have about 30 hours of speech. In our shared task, we build 68-hour Zh\u2192En speech translation corpus, BSTC (Zhang et al., 2021) for training and evaluation. The dataset alleviates the Zh\u2192En data scarcity, but it's still insufficient to train data-hungry end-to-end simultaneous translation models.",
"cite_spans": [
{
"start": 153,
"end": 183,
"text": "MuST-C (Di Gangi et al., 2019)",
"ref_id": null
},
{
"start": 193,
"end": 215,
"text": "(Wang et al., 2020a,b)",
"ref_id": null
},
{
"start": 218,
"end": 259,
"text": "Europarl-ST (Iranzo-S\u00e1nchez et al., 2020)",
"ref_id": null
},
{
"start": 262,
"end": 306,
"text": "Aug-LibriSpeech (Kocabiyikoglu et al., 2018)",
"ref_id": null
},
{
"start": 625,
"end": 652,
"text": "(Federmann and Lewis, 2016)",
"ref_id": "BIBREF6"
},
{
"start": 664,
"end": 684,
"text": "(Wang et al., 2020b)",
"ref_id": "BIBREF18"
},
{
"start": 847,
"end": 867,
"text": "(Zhang et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Scarcity",
"sec_num": "5.1"
},
{
"text": "The second problem lies in system evaluation, which has not been widely explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Dilemma",
"sec_num": "5.2"
},
{
"text": "Traditional metrics such as BLEU (Papineni et al., 2002) , NIST (Doddington, 2002) , METEOR (Banerjee and Lavie, 2005) , etc, are designed for text translation. These metrics based on accurate matching between system outputs and references. However, to reduce latency in simultaneous interpretation, human interpreters usually use strategies such as reasonable omissions, avoiding longdistance reordering in translation, etc. Thus the traditional metrics are not suitable to evaluate the simultaneous interpretation.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 64,
"end": 82,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 92,
"end": 118,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Dilemma",
"sec_num": "5.2"
},
{
"text": "On the other hand, there is no metric to evaluate both translation quality and latency. In our shared task, we propose a novel ranking algorithm, I-MOS. We only consider the proportion of optimal points, ignoring whether the points lie in low-latency or high-latency. Therefore, our ranking doesn't differentiate latency regimes. However, it remains open to question whether it is reasonable to compare two systems with no intersection in latency, like the ICT and XMU in Figure 3 (a) . The ranking might be more convincing if ICT had provided results at high latency and XMU has provided results at low latency.",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 484,
"text": "Figure 3 (a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation Dilemma",
"sec_num": "5.2"
},
{
"text": "We note that IWSLT has also hosted simultaneous translation shared tasks 4 . They proposed to rank systems by the translation quality with different latency regimes: Low Latency: AL <= 3, Medium Latency: AL <= 6, and High Latency: AL <= 15. For each team, the submitted system that achieves the best translation quality is chosen for ranking in each latency regime. However, the value of artificially defined latency threshold between regimes has a big impact on the ranking results. As illustrated in Figure 4 , different latency thresholds lead to completely different rankings of the two teams.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Evaluation Dilemma",
"sec_num": "5.2"
},
{
"text": "Actually, the ideal ranking mechanism is to rank all systems within a similar latency interval. However, asking participants to submit results in almost every latency regime is unreasonable, because existing policies all have a preference in trading off latency and translation quality. For example, wait-k focuses on getting controllable low latency, while the inspiration behind MU is to translate until a segment with definite meaning is formed, leading to a high latency as well as high quality. Therefore, it is a dilemma to evaluate systems comprehensively while distinguishing different latency regions reasonably. This problem can be explored in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Dilemma",
"sec_num": "5.2"
},
{
"text": "Recently, more and more simultaneous translation systems have emerged in international conferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "5.3"
},
{
"text": "In practical applications, systems face robust and controllability issues. Being robust denotes the system should achieve a high translation quality and be insensitive to speech noise, including sound capture noise, speaker's accent, disfluency in speech, etc. Being controllable means the system should be able to remember and understand some named entities and should be able to be intervened.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "5.3"
},
{
"text": "Our shared task provides such an opportunity for participants to pay attention to the robustness problem. For example, ICT and PingAn have adopted data augmentation to enhance the robustness of their systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "5.3"
},
{
"text": "In terms of controllability, it is not difficult to integrate an intervention mechanism in pipeline systems. For example, a pre-defined translation of a named entity can be introduced to the MT module. However, controllability is not easy to be guaranteed for end-to-end simultaneous translation systems (Ren et al., 2020; Ma et al., 2020) . It remains a challenge to correct a translation without an intermediate ASR result. We also hope to see more work focusing on real-world simultaneous translation applications and discussing some interesting issues, such as the document-level ASR error correction in pipeline systems, and how to enhance the controllability in end-to-end speech-to-text systems, etc.",
"cite_spans": [
{
"start": 304,
"end": 322,
"text": "(Ren et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 323,
"end": 339,
"text": "Ma et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "5.3"
},
{
"text": "This paper presents the results of the Zh\u2192En simultaneous translation shared task hosted on the 2nd Workshop on Automatic Simultaneous Translation (AutoSimTrans). The shared task includes two tracks, the text-to-text track (Track1) and the speech-to-text track (Track2). Six systems were submitted to the shared task, four to Track1 and two to Track2. We propose an evaluation method \"Monotonic Optimal Sequence\" (MOS) to evaluate both translation quality and time latency. We report the results and further discuss some important open issues of simultaneous translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Regrettably, the number of submissions is less than expected, especially for the speech-to-text track. In fact, there are more than 300 teams registered. However, most of them did not submit their results. The possible reason may be that the interdisciplinary task is not easy for participants. We hope to see more participants in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://sc.cipsc.org.cn/mt/conference/2021/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://research.baidu.com/Blog/index-view?id=109",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://iwslt.org/2021/simultaneous",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Monotonic infinite lookback attention for simultaneous machine translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1313--1323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simulta- neous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1313-1323, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Can neural machine translation do simultaneous translation?",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Masha",
"middle": [],
"last": "Esipova",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.02012"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho and Masha Esipova. 2016. Can neu- ral machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Must-c: a multilingual speech translation corpus",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Mattia",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2012--2017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia A Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. Must-c: a multilingual speech translation corpus. In 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 2012-2017. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics",
"authors": [
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the second international conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "138--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proceedings of the second international conference on Human Language Tech- nology Research, pages 138-145.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient wait-k models for simultaneous machine translation",
"authors": [
{
"first": "Maha",
"middle": [],
"last": "Elbayad",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Verbeek",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.08595"
]
},
"num": null,
"urls": [],
"raw_text": "Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2020. Efficient wait-k models for simultaneous ma- chine translation. arXiv preprint arXiv:2005.08595.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Microsoft speech language translation (mslt) corpus: The iwslt 2016 release for english, french and german",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2016,
"venue": "International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Federmann and William D Lewis. 2016. Mi- crosoft speech language translation (mslt) corpus: The iwslt 2016 release for english, french and ger- man. In International Workshop on Spoken Lan- guage Translation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to translate in real-time with neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1053--1062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor OK Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Europarl-st: A multilingual corpus for speech translation of parliamentary debates",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Iranzo-S\u00e1nchez",
"suffix": ""
},
{
"first": "Joan",
"middle": [
"Albert"
],
"last": "Silvestre-Cerd\u00e0",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Jorge",
"suffix": ""
},
{
"first": "Nahuel",
"middle": [],
"last": "Rosell\u00f3",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Sanchis",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Civera",
"suffix": ""
},
{
"first": "Alfons",
"middle": [],
"last": "Juan",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "8229--8233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier Iranzo-S\u00e1nchez, Joan Albert Silvestre-Cerd\u00e0, Javier Jorge, Nahuel Rosell\u00f3, Adri\u00e0 Gim\u00e9nez, Al- bert Sanchis, Jorge Civera, and Alfons Juan. 2020. Europarl-st: A multilingual corpus for speech trans- lation of parliamentary debates. In ICASSP 2020- 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8229-8233. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Augmenting librispeech with french translations: A multimodal corpus for direct speech translation evaluation. Language Resources and Evaluation",
"authors": [
{
"first": "Laurent",
"middle": [],
"last": "Ali Can Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kraif",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Can Kocabiyikoglu, Laurent Besacier, and Olivier Kraif. 2018. Augmenting librispeech with french translations: A multimodal corpus for direct speech translation evaluation. Language Resources and Evaluation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Quartznet: Deep automatic speech recognition with 1d time-channel separable convolutions",
"authors": [
{
"first": "Stanislav",
"middle": [],
"last": "Samuel Kriman",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Beliaev",
"suffix": ""
},
{
"first": "Jocelyn",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Oleksii",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lavrukhin",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Leary",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6124--6128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Kriman, Stanislav Beliaev, Boris Ginsburg, Jo- celyn Huang, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, and Yang Zhang. 2020. Quartznet: Deep automatic speech recognition with 1d time-channel separable convolutions. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6124-6128. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unidirectional memory-self-attention transducer for online speech recognition",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Jianzong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.11594"
]
},
"num": null,
"urls": [],
"raw_text": "Jian Luo, Jianzong Wang, Ning Cheng, and Jing Xiao. 2021. Unidirectional memory-self-attention trans- ducer for online speech recognition. arXiv preprint arXiv:2102.11594.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stacl: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3025--3036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, et al. 2019. Stacl: Simultaneous translation with implicit antici- pation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3025-3036.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simulmt to simulst: Adapting simultaneous text translation to end-to-end simultaneous speech translation",
"authors": [
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.02048"
]
},
"num": null,
"urls": [],
"raw_text": "Xutai Ma, Juan Pino, and Philipp Koehn. 2020. Simulmt to simulst: Adapting simultaneous text translation to end-to-end simultaneous speech trans- lation. arXiv preprint arXiv:2011.02048.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simulspeech: End-to-end simultaneous speech to text translation",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jinglin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3787--3796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Ren, Jinglin Liu, Xu Tan, Chen Zhang, QIN Tao, Zhou Zhao, and Tie-Yan Liu. 2020. Simulspeech: End-to-end simultaneous speech to text translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3787-3796.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Covost: A diverse multilingual speech-to-text translation corpus",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.01320"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Juan Pino, Anne Wu, and Jiatao Gu. 2020a. Covost: A diverse multilingual speech-to-text translation corpus. arXiv preprint arXiv:2002.01320.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Covost 2: A massively multilingual speechto-text translation corpus",
"authors": [
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.10310"
]
},
"num": null,
"urls": [],
"raw_text": "Changhan Wang, Anne Wu, and Juan Pino. 2020b. Covost 2: A massively multilingual speech- to-text translation corpus. arXiv preprint arXiv:2007.10310.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bstc: A large-scale chinese-english speech translation dataset",
"authors": [
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Qinfei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Second Workshop on Automatic Simultaneous Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, ying Chen, and Qin- fei Li. 2021. Bstc: A large-scale chinese-english speech translation dataset. In Proceedings of the Second Workshop on Automatic Simultaneous Trans- lation. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dynamic sentence boundary detection for simultaneous translation",
"authors": [
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Automatic Simultaneous Translation",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqing Zhang and Chuanqiang Zhang. 2020. Dy- namic sentence boundary detection for simultaneous translation. In Proceedings of the First Workshop on Automatic Simultaneous Translation, pages 1-9, Seattle, Washington. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning adaptive segmentation policy for simultaneous translation",
"authors": [
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2280--2289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2020a. Learning adaptive segmentation policy for simultaneous translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2280-2289, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Future-guided incremental transformer for simultaneous translation",
"authors": [
{
"first": "Shaolei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Liangyou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.12465"
]
},
"num": null,
"urls": [],
"raw_text": "Shaolei Zhang, Yang Feng, and Liangyou Li. 2020b. Future-guided incremental transformer for simulta- neous translation. arXiv preprint arXiv:2012.12465.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Two examples of the results submitted by two teams. Each point shows the latency (X-axis) -BLEU (Y-axis) of a submitted system.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "An illustration of our Iterative Monotonic Optimal Sequence (I-MOS) algorithm. First (a). plot the results of all teams, then (b) (c) (d) iteratively calculate the monotonic optimal sequence (MOS) of level k and update the score of the teams belong to level 1, 2, ..., k. The X-axis denotes the average lagging.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "The evaluation results of the two tracks. The order in the legend denotes the real ranking.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": ". IWSLT's Ranking with regimes boundary (3, 6, 15) (b). IWSLT's Evaluation with regimes boundary (4.5, 9, 13.5) An illustration of the ranking algorithm of IWSLT's simultaneous translation shared task. The two figures vary only in the threshold of the latency regimes. According to their algorithm, the winner offigure (a)is Team1 in all the three regimes, while the winner evaluated in figure (b) is Low Latency: Team2, Medium Latency: Team2, and High Latency: Team1.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"text": "Teams submission: t i contains all results submitted by team i Output: Teams score S: s i is the score of team i for ranking 1 tl = [0, 0, ..., 0] \u22b2 Initialize teams level",
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Algorithm 1: Iterative Monotonic Optimal</td></tr><tr><td colspan=\"3\">Sequence (I-MOS)</td></tr><tr><td/><td colspan=\"2\">Input: Number of teams N</td></tr><tr><td colspan=\"3\">Input: 2 \u22b2 tl[i] denotes the level of team i</td></tr><tr><td colspan=\"2\">3 k \u2190 1</td><td>\u22b2 Start from level 1</td></tr><tr><td colspan=\"3\">4 while N i=1 tl[i] = 0 do</td></tr><tr><td>5</td><td colspan=\"2\">for i=1, 2, ..., N do</td></tr><tr><td>6</td><td/><td>if tl[i] = 0 then</td></tr><tr><td>7</td><td/><td>s[i] \u2190 s[i] + 1</td></tr><tr><td/><td/><td>)</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "The evaluated level of each team and the proportion of points on the MOS of the corresponding level. The table shows the ranking of the teams from top to bottom.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}