{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:40.571988Z"
},
"title": "Dynamic Sentence Boundary Detection for Simultaneous Translation",
"authors": [
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "zhangruiqing01@baidu.com"
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "zhangchuanqiang@baidu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Simultaneous Translation is a great challenge in which translation starts before the source sentence is finished. Most studies take transcription as input and focus on balancing translation quality and latency for each sentence. However, most ASR systems cannot provide accurate sentence boundaries in real time. Thus it is a key problem to segment sentences in the word stream before translation. In this paper, we propose a novel method for sentence boundary detection that casts it as a multi-class classification task under an end-to-end pre-training framework. Experiments show significant improvements in terms of both translation quality and latency.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Simultaneous Translation is a great challenge in which translation starts before the source sentence is finished. Most studies take transcription as input and focus on balancing translation quality and latency for each sentence. However, most ASR systems cannot provide accurate sentence boundaries in real time. Thus it is a key problem to segment sentences in the word stream before translation. In this paper, we propose a novel method for sentence boundary detection that casts it as a multi-class classification task under an end-to-end pre-training framework. Experiments show significant improvements in terms of both translation quality and latency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Simultaneous Translation aims to translate the speech of a source language into a target language as quickly as possible without interrupting the speaker. Typically, a simultaneous translation system is composed of an automatic speech recognition (ASR) model and a machine translation (MT) model. The ASR model transforms the audio signal into text in the source language, and the MT model translates the source text into the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies on simultaneous translation (Cho and Esipova, 2016; Ma et al., 2019; Arivazhagan et al., 2019) focus on the trade-off between translation quality and latency. They explore a policy that determines when to begin translating given a stream of transcribed words. However, there is a gap between transcription and ASR output: some ASR models do not provide punctuation, or cannot provide accurate punctuation in real time, while transcription is always well-formed. See Figure 1 for an illustration. Without sentence boundaries, the state-of-the-art wait-k model takes insufficient text as input and produces an incorrect translation. Therefore, sentence boundary detection (or sentence segmentation) 1 plays an important role in narrowing the gap between ASR output and transcription. A good segmentation not only improves translation quality but also reduces latency.",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Cho and Esipova, 2016;",
"ref_id": "BIBREF3"
},
{
"start": 67,
"end": 83,
"text": "Ma et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 84,
"end": 109,
"text": "Arivazhagan et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 489,
"end": 495,
"text": "Figure",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Studies of sentence segmentation fall into one of the following two categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The strategy performs segmentation from a speech perspective. F\u00fcgen et al. (2007) and Bangalore et al. (2012) used prosodic pauses detected during speech recognition as segmentation boundaries. This method is effective in dialogue scenarios, with clear silences during the conversation. However, it does not work well for long speech audio, such as lectures. According to Venuti (2012) , silence-based chunking accounts for only 6.6%, 10%, and 17.1% of boundaries in English, French, and German, respectively, indicating that in most cases it cannot effectively detect boundaries in streaming words.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "F\u00fcgen et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 88,
"end": 111,
"text": "Bangalore et al. (2012)",
"ref_id": "BIBREF1"
},
{
"start": 368,
"end": 381,
"text": "Venuti (2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The strategy treats segmentation as a standard text processing problem. These studies consider the problem as classification or sequence labeling, based on SVMs or conditional random fields (CRFs) (Lu and Ng, 2010; Wang et al., 2012; Ueffing et al., 2013) .",
"cite_spans": [
{
"start": 194,
"end": 211,
"text": "(Lu and Ng, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 212,
"end": 230,
"text": "Wang et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 231,
"end": 252,
"text": "Ueffing et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other research utilized language models, either based on N-grams (Wang et al., 2016) or recurrent neural networks (RNNs) (Tilk and Alum\u00e4e, 2015) .",
"cite_spans": [
{
"start": 65,
"end": 84,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 119,
"end": 142,
"text": "(Tilk and Alum\u00e4e, 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use classification to solve the problem of sentence segmentation from the perspective of text. Instead of predicting a sentence boundary for a single position, we propose a multi-position boundary prediction approach. Specifically, for a source text x = {x_1, ..., x_T}, we calculate the probability of a sentence boundary Figure 1 : An English-to-German example that translates from a streaming source with and without sentence boundaries. We take the wait-k model (Ma et al., 2019) for illustration, with k=3. The wait-3 model first performs three READ (wait) actions at the beginning of each sentence (shown in blue), and then alternates one READ with one WRITE action in the following steps. Given the input source without sentence boundaries (in the 4th line), the wait-3 model (in the 5th line) does not take the three READ actions at the beginning of the following sentences. Therefore, the English phrase \"it's going to\", which should have been translated as \"wird\", is rendered as the meaningless translation \"es ist geht dass\" with limited context during wait-3 inference.",
"cite_spans": [
{
"start": 495,
"end": 512,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "after x_t, for t = T, T \u2212 1, ..., T \u2212 M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus the latency of translation can be controlled within L + M words, where L is the length of the sentence. Inspired by recent pre-training techniques (Devlin et al., 2019; Sun et al., 2019 ) that have been successfully used in many NLP tasks, we use a pre-trained model for initialization and fine-tune it on source-side sentences. Overall, the contributions are as follows:",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 176,
"end": 192,
"text": "Sun et al., 2019",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel sentence segmentation method based on pre-trained language representations, which have been successfully used in various NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our method dynamically predicts the boundary at multiple locations, rather than a specific location, achieving high accuracy with low latency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies show that the pre-training and fine-tuning framework achieves significant improvements on various NLP tasks. Generally, a model is first pre-trained on large unlabeled data. Then, in the fine-tuning step, the model is initialized with the parameters obtained in the pre-training step and fine-tuned on labeled data for specific tasks. Devlin et al. (2019) proposed a generalized framework, BERT, to learn language representations based on a deep Transformer (Vaswani et al., 2017) encoder. Rather than traditionally training a language model left-to-right or right-to-left, they proposed a masked language model (MLM) that randomly replaces some tokens in a sequence with a placeholder (mask) and trains the model to predict the original tokens. They also pre-train the model on a next sentence prediction (NSP) task, which is to predict whether one sentence is the subsequent sentence of another. Sun et al. (2019) proposed a pre-training framework, ERNIE, that integrates more knowledge. Rather than masking single tokens, they proposed to mask groups of words at different levels, such as entities, phrases, etc. The model achieves state-of-the-art performance on many NLP tasks.",
"cite_spans": [
{
"start": 356,
"end": 376,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 478,
"end": 500,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 932,
"end": 949,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In this paper, we train our model under the ERNIE framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Given a streaming input x = {x_1, ..., x_t, ..., x_T}, the task of sentence segmentation is to determine whether each x_t \u2208 x is the end of a sentence. Thus the task can be considered a classification problem, p(y_t|x, \u03b8), where y_t \u2208 {0, 1}. However, in the simultaneous translation scenario, the latency is unacceptable if we take the full source text as contextual information. Thus we should limit the context size and make decisions dynamically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method",
"sec_num": "3"
},
{
"text": "As the input is a word stream, the sentence boundary detection problem can be reformulated as whether there exists a sentence boundary up to the current word x_t. Thus we can use the word stream as context to make a prediction. We propose a multi-class classification model to predict the probability that any of the few words before x_t is a sentence boundary (Section 3.1). We use the ERNIE framework to first pre-train a language representation and then fine-tune it for sentence boundary detection (Section 3.2). We also propose a dynamic voted inference strategy (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Method",
"sec_num": "3"
},
{
"text": "For a streaming input x = {x_1, ..., x_t}, our goal is to detect whether there is a sentence boundary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "Figure 2: Illustration of the dynamic classification model. M = 2 means there are 4 classes. We use ERNIE to train a classifier. Class \u03c6 means that there is no sentence boundary in the stream so far. Class \u2212m (m = 0, 1, 2) means that x_{t\u2212m} is the end of a sentence, and we then put a period after it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "up to the current word x_t since the last sentence boundary. Rather than a binary classification that detects whether x_t is a sentence boundary, we propose a multi-class method. The classes are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "y = \u03c6, if no sentence boundary is detected; 0, if x_t is the end of a sentence; \u22121, if x_{t\u22121} is the end of a sentence; ...; \u2212M, if x_{t\u2212M} is the end of a sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "where M is the maximum offset from the current position. Thus, we have M + 2 classes. See Figure 2 for an illustration, where we set M = 2, so the model predicts 4 classes for the input stream. If the output class is \u03c6, the model has not detected any sentence boundary and will continue receiving new words. If the output class is 0, the current word x_t is the end of a sentence and we put a period after it. Similarly, class \u2212m denotes adding a sentence boundary after x_{t\u2212m}. Once a sentence boundary is detected, the sentence is extracted from the stream and sent to the MT system as input for translation. Sentence detection then continues from x_{t\u2212m+1}. Each time our system receives a new word x_t, the classifier predicts boundary probabilities for the last M + 1 words. If the output class is \u03c6, the classifier receives a new word x_{t+1} and recomputes the probabilities for",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
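The detection loop described above (receive a word, classify, cut the sentence at offset m, send it to MT, continue from x_{t-m+1}) can be sketched as follows. This is our own illustrative sketch, not the authors' code; `classify` is a hypothetical stand-in for the trained classifier.

```python
def segment_stream(words, classify, M=2):
    """Online sentence boundary detection loop (sketch of Section 3.1).

    `classify(buffer)` stands in for the trained classifier: it returns
    None for class phi (no boundary yet), or an offset m in 0..M meaning
    buffer[-1-m] is the end of a sentence.
    """
    sentences, buf = [], []
    for w in words:
        buf.append(w)
        m = classify(buf)
        if m is not None:
            cut = len(buf) - m           # boundary falls after buf[cut-1]
            sentences.append(buf[:cut])  # extracted sentence, sent to MT
            buf = buf[cut:]              # detection continues from x_{t-m+1}
    if buf:
        sentences.append(buf)            # flush the unfinished tail
    return sentences
```

The offset classes let the cut land up to M words behind the newest word, which is exactly what allows the model to use a few future words as context.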
{
"text": "x_{t+1}, x_t, x_{t\u22121}, ...,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "x_{t\u2212M+1}. Generally, more contextual information helps the classifier improve precision (Section 4.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "Our training data is extracted from paragraphs. Question marks, exclamation marks, and semicolons are mapped to periods and all other punctuation symbols are removed from the corpora. Then for every two adjacent sentences in a paragraph, we concatenate them to form a long sequence, x. We record the position of the period as r and then remove the period from the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
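The data construction just described (map question marks, exclamation marks, and semicolons to periods, strip other punctuation, concatenate adjacent sentences, and record the period position r) might look like this sketch; the helper name and regular expressions are our own assumptions.

```python
import re

def make_pairs(sentences):
    """Build (concatenated word sequence, period position r) pairs from
    adjacent sentences of a paragraph. Illustrative only."""
    cleaned = []
    for s in sentences:
        s = re.sub(r"[?!;]", ".", s)    # map ?, !, ; to periods
        s = re.sub(r"[^\w\s.]", "", s)  # drop all other punctuation
        cleaned.append(s.rstrip(". ").split())
    pairs = []
    for a, b in zip(cleaned, cleaned[1:]):
        pairs.append((a + b, len(a)))   # r = word index of the removed period
    return pairs
```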
{
"text": "For",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "x = (x_1, x_2, ..., x_N) with N words, we generate r + M samples for t = 1, 2, ..., (r + M), in the form of <(x_1, ..., x_t), y_t>,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "where y t is the label that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y_t = \u03c6, if t < r; \u2212(t \u2212 r), if t \u2208 [r, r + M]",
"eq_num": "(1)"
}
],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "Note that if the length of the second sentence is less than M, we concatenate subsequent sentences until r + M samples are collected. Then we define the loss function as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
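Under the labeling scheme of Eq. (1), sample generation for one concatenated sequence can be sketched as follows; the names are our own, and the string 'phi' denotes the no-boundary class.

```python
def make_samples(x, r, M):
    """Generate the r + M training samples <(x_1..x_t), y_t> of Eq. (1)
    for a word list x whose removed period sat after word r (1-indexed)."""
    samples = []
    for t in range(1, r + M + 1):
        y = "phi" if t < r else -(t - r)  # Eq. (1)
        samples.append((x[:t], y))
    return samples
```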
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(\u03b8) = \u2211_{(x,r)\u2208D} ( \u2211_{t=1}^{r\u22121} log p(y_t = \u03c6 | x_{\u2264t}; \u03b8) + \u2211_{t=r}^{r+M} log p(y_t = \u2212(t \u2212 r) | x_{\u2264t}; \u03b8) )",
"eq_num": "(2)"
}
],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "where D is the dataset containing pairs of concatenated sentences x and the corresponding positions r of the removed periods. M is a hyperparameter denoting the number of waiting words. Note that our method differs from previous work in the manner of classification. Sridhar et al. (2013) predict whether a word x_t is labeled as the end of a sentence or not by binary classification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "p(y_t = 0 | x_{t\u22122}^{t+2}) + p(y_t = 1 | x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "_{t\u22122}^{t+2}) = 1 (3) where y_t = 0 means x_t is not the end of a sentence and y_t = 1 means x_t is the end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "x_{t\u22122}^{t+2} denotes the 5 words x_{t\u22122}, x_{t\u22121}, ..., x_{t+2}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "Some other language-model-based work (Wang et al., 2016) calculates probabilities over all words in the vocabulary, including the period:",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211_{w \u2208 V \u222a {\".\"}} p(y_t = w | x_{\u2264t}) = 1",
"eq_num": "(4)"
}
],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "and decides whether x_t is a sentence boundary by comparing the probabilities of y_t = \".\" and y_t =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "x_{t+1}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "Figure 3: Our voting algorithm for online prediction with M = 2. Given the stream text up to x_t, the overall probability of adding a sentence boundary after x_{t\u22122} is the average of the M + 1 probabilities in red, while for x_{t\u22121} (in green) and x_t (in blue), the number of available probabilities is less than M + 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "The performance of these methods is limited by incomplete semantics, since they do not consider boundary detection globally. In our method, we leverage more future words and constrain the classes globally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "p(y_t = \u03c6 | x_{\u2264t}) + \u2211_{m=0}^{M} p(y_t = \u2212m | x_{\u2264t}) = 1 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
{
"text": "The restriction is motivated by the lecture scenario, where a sentence is rarely so short that it contains only 1 or 2 words. Thus, the probability distribution prohibits adjacent words from both being the ends of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.2"
},
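The global restriction of Eq. (5) amounts to a single softmax over the M + 2 classes, so probability mass given to one candidate boundary necessarily suppresses its neighbors. A minimal sketch, with hypothetical logits:

```python
import math

def class_distribution(logits):
    """Softmax over the M + 2 classes [phi, 0, -1, ..., -M], so the
    probabilities sum to 1 as in Eq. (5): boosting one boundary class
    necessarily suppresses the others. Logit values are hypothetical."""
    z = [math.exp(v) for v in logits]
    s = sum(z)
    return [v / s for v in z]
```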
{
"text": "At inference time, we predict sentence boundaries sequentially with a dynamic voting strategy. Each time a new word x_t is received, we predict the probabilities of the M + 1 boundary classes as shown at the bottom of Figure 3 , then check whether the averaged boundary probability at any of the previ-",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
{
"text": "ous M + 1 positions (x_{t\u2212M}, x_{t\u2212M+1}, ..., x_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
{
"text": "is larger than a threshold \u03b8_Th. If so, we add a sentence boundary at the corresponding position. Otherwise, we continue to receive new words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
{
"text": "Note that an averaged (voted) probability is used. While the probability of adding a sentence boundary after x_{t\u2212M} has M + 1 probabilities to average, the number of probabilities available for subsequent positions is less than M + 1. Here we use the voted average of the existing probabilities. Specifically, to judge whether x_{t'} is a sentence boundary, we average t \u2212 t' + 1 probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
{
"text": "(1 / (t \u2212 t' + 1)) \u2211_{m=0}^{t\u2212t'} p(y = \u2212m | x_1, ..., x_{t'+m}) (6) where t' \u2208 [t \u2212 M, t].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
{
"text": "If the boundary probabilities of more than one position among x_{t\u2212M}, ..., x_t exceed the threshold \u03b8_Th at the same time, we choose the front-most position as the sentence boundary. This is consistent with our training process: if a sample contains two or more sentence boundaries, we ignore the later ones and label the class y_t according to the first boundary. This is because we generate samples around each period in the original paragraph, as described in Section 3.2. From another point of view, this strategy can also compensate for some incorrect suppression of adjacent boundaries, thereby improving online prediction accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Inference",
"sec_num": "3.3"
},
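The voted inference of Eq. (6), together with the front-most tie-breaking rule, can be sketched as follows. Names and data layout are our assumptions: `prob_history` maps each prefix length u to an assumed class distribution over {phi, 0, -1, ..., -M} after reading x_1..x_u.

```python
def voted_boundary(prob_history, t, M, threshold):
    """Dynamic voted inference (sketch of Eq. (6)). Returns the
    front-most position t' in [t-M, t] whose averaged boundary
    probability exceeds the threshold, or None."""
    for tp in range(t - M, t + 1):                  # front-most first
        votes = [prob_history[tp + m][-m]           # p(y = -m | x_1..x_{tp+m})
                 for m in range(t - tp + 1) if tp + m in prob_history]
        if votes and sum(votes) / len(votes) > threshold:
            return tp
    return None
```

Scanning candidates front to back implements the rule of preferring the earliest boundary when several positions exceed the threshold at once.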
{
"text": "Experiments are conducted on English-German (En-De) simultaneous translation. We evaluate 1) the F-score 2 of sentence boundary detection and 2) case-sensitive tokenized 4-gram BLEU (Papineni et al., 2002) on the final translations of the segmented sentences. To reduce the impact of the ASR system, we use transcription without punctuation in both training and evaluation.",
"cite_spans": [
{
"start": 182,
"end": 205,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "The datasets used in our experiments are listed in Table 1 . We use two parallel corpora from machine translation tasks: WMT 14 3 and IWSLT 14 4 . WMT 14 is a text translation corpus including 4.4M sentences, mainly from news and web sources. IWSLT 14 is a speech translation corpus of TED lectures with transcribed text and corresponding translations; here we only use its text part, containing 0.19M sentences in the training set. We train the machine translation model on WMT 14 with the base version of the Transformer model (Vaswani et al., 2017) , achieving a BLEU score of 27.2 on newstest2014. Our sentence boundary detection model is trained on the source transcriptions of IWSLT 14 unless otherwise specified (Section 4.3). To evaluate system performance, we merge the IWSLT test sets of four years (2010-2014) to construct a large test set of 7040 sentences. The overall statistics of our dataset are shown in Table 1 . We evaluate our model and the existing methods listed below:",
"cite_spans": [
{
"start": 534,
"end": 556,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 1",
"ref_id": null
},
{
"start": 926,
"end": 933,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "\u2022 dynamic-base is our proposed method that detects sentence boundaries dynamically using multi-class classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "\u2022 dynamic-force adds a constraint to dynamic-base. To keep in line with (Wang et al., 2016) , we add a constraint that a sentence is force-segmented if it is longer than \u03b8_l.",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "\u2022 N-gram is the method that uses an N-gram language model to compare the probability of adding vs. not adding a boundary at x_t after receiving x_{t\u2212N+1}, ..., x_t. We implement it according to (Wang et al., 2016 ).",
"cite_spans": [
{
"start": 188,
"end": 206,
"text": "(Wang et al., 2016",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "\u2022 T-LSTM uses an RNN-based classification model with two classes. We implement a unidirectional RNN and perform training according to (Tilk and Alum\u00e4e, 2015) 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Our classifier in dynamic-base and dynamic-force is trained under the ERNIE base framework. We use the released parameters 6 obtained in the pre-training step for initialization. In the fine-tuning stage, we use a learning rate of 2e \u22125 . 5 we only keep the two classes of period and \u03c6 in this work 6 https://github.com/PaddlePaddle/ERNIE Table 2 reports the results of source sentence segmentation on En-De translation, where latency is measured by Consecutive Wait (CW) (Gu et al., 2017) , the number of words between two translate actions. To eliminate the impact of different policies in simultaneous translation, we only execute translation at the end of each sentence. Therefore, the CW here denotes the sentence length L plus the number of future words M. We report its average and maximum values as \"avgCW\" and \"maxCW\", respectively. Better performance corresponds to a higher F-score and BLEU and lower latency (CW). The translation quality obtained by using the ground-truth periods as sentence segmentation is shown in the first line as Oracle.",
"cite_spans": [
{
"start": 228,
"end": 229,
"text": "5",
"ref_id": null
},
{
"start": 465,
"end": 482,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
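Since translation is executed only at detected sentence ends, the CW of a segment of length L is L + M; avgCW and maxCW then reduce to a simple aggregation. A purely illustrative sketch:

```python
def cw_stats(segment_lengths, M):
    """Consecutive Wait under sentence-end-only translation: each
    segment of length L costs L + M waited words.
    Returns (avgCW, maxCW). Illustrative only."""
    cws = [L + M for L in segment_lengths]
    return sum(cws) / len(cws), max(cws)
```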
{
"text": "The N-gram method calculates the probability of adding (p_add) vs. not adding (p_not) a period at each position, and decides whether to chunk by checking whether p_add/p_not exceeds \u03b8_Th. The N-gram method without threshold tuning (with \u03b8_Th = e^0.0) divides sentences into small pieces, achieving the lowest average latency of 6.64. However, the F-score of segmentation is very low because of the incomplete nature of the n-gram feature. Notably, precision and recall differ considerably (precision = 0.33, recall = 0.78) in this setup. Therefore, we need to choose a better threshold by grid search (Wang et al., 2016) . With \u03b8_Th equal to e^2.0, the F-score of the N-gram method increases slightly (0.46 \u2192 0.48), with more balanced precision and recall (precision = 0.51, recall = 0.48). However, the max latency runs out of control, resulting in a maximum of 161 words in a sentence. We also tried to shorten the latency of the N-gram method by force segmentation (Wang et al., 2016) , but the result was very poor (precision = 0.33, recall = 0.40).",
"cite_spans": [
{
"start": 596,
"end": 615,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 968,
"end": 987,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Results",
"sec_num": "4.1"
},
{
"text": "The T-LSTM method with a hidden size of 256 performs better than N-gram, but its F-score and BLEU are still limited. In contrast, our dynamic approaches with M = 1 achieve the best F-score of 0.74, and the final translation is very close to the Oracle result. In particular, precision and recall reach about 0.72 and 0.77, respectively, in both dynamic-force and dynamic-base. Accurate sentence segmentation brings better translation performance, an improvement of 1.55 BLEU over T-LSTM. Moreover, our approach is not inferior in terms of latency: both average latency and max latency are controlled at a relatively low level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Results",
"sec_num": "4.1"
},
{
"text": "It is interesting to note that dynamic-force performs better than dynamic-base in terms of latency and BLEU. This suggests the effectiveness of the force segmentation strategy: selecting the chunking location under a sentence length limit does not hurt segmentation accuracy, and it enhances translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Results",
"sec_num": "4.1"
},
{
"text": "According to Section 3.2, the order of sentences in the original corpora affects the generation of training samples. In this section, we investigate the effect of various data reordering strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Magic in Data Processing",
"sec_num": "4.2"
},
{
"text": "A basic method is to use the original sentence order of the speech corpora, denoted Basic. However, the samples generated are limited, which makes the model easy to over-fit. To overcome this problem, we adopt two methods to expand the data scale: 1) Duplicate the original data multiple times, or 2) add Synthetic adjacent sentences by randomly selecting two sentences from the corpora. These two methods greatly expand the total amount of data, but the gain for the model is uncertain. As an alternative, we explore a Sort method, which sorts sentences in alphabetical order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Magic in Data Processing",
"sec_num": "4.2"
},
{
"text": "The performance of the four training data organization methods is shown in Figure 4 , all built on IWSLT2014 and conducted under the setup of M = 1 and \u03b8_l = 40. It is clear that Basic, Duplicate and Synthetic all suffer from over-fitting: they quickly achieve their best results and then gradually decline. Surprisingly, the Sort approach is prominent in both segmentation accuracy and translation performance. This may be due to the following reasons: 1) Sentence classification is not a difficult task, especially when M = 1 gives 3-class classification (y \u2208 [\u03c6, 0, \u22121]), making the task easy to over-fit. 2) Compared with Basic, Duplicate offers more varied sample combinations in batch training, but there is no essential difference between the two methods. 3) Synthetic hardly profits our model, because the synthesized data may be very simple due to random selection. 4) Sort may simulate difficult cases in real scenes and train on them pertinently, giving it poor performance at the start but making it less prone to over-fitting. There are many samples with identical head and tail words in the sorted data, such as: \"and it gives me a lot of hope and ...\" and \"that means there's literally thousands of new ideas that ... \". Even human beings find it difficult to determine whether the preceding words are sentence boundaries in these samples. In the Basic, Duplicate and Synthetic methods, such samples are usually submerged in a large quantity of simple samples, whereas the data organization of Sort greatly strengthens the model's ability to learn these difficult samples.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Magic in Data Processing",
"sec_num": "4.2"
},
{
"text": "There is no need to worry that the Sort method cannot cover simple samples, because we sort by rows in the source file, and some of the rows contain multiple sentences (an average of 1.01 sentences per row), which appear in real speech order. We argue that these sentences are sufficient to model the classification of simple samples, given how rapidly the other three methods over-fit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Magic in Data Processing",
"sec_num": "4.2"
},
{
"text": "Next, we turn to the question of how the domain of the training corpus affects results. With the test set unchanged, we compare sentence boundary detection models trained on the out-of-domain corpus WMT14 and the in-domain corpus IWSLT14, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain vs. In-Domain",
"sec_num": "4.3"
},
{
"text": "As mentioned before, WMT14 is a larger text translation corpus drawn mainly from news and web sources, while the test set comes from IWSLT, which contains transcriptions of TED talks on various topics. Intuitively, a larger dataset provides more diverse samples, but due to the domain change, it does not necessarily lead to improvements in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain vs. In-Domain",
"sec_num": "4.3"
},
{
"text": "The performance of various models trained on WMT14 is shown in Table 3 . Dynamic-force again achieves the best translation performance with a relatively small average latency and limits the maximum latency to 40 words. However, it underperforms the same model trained on IWSLT2014 (as shown in Table 2 ), demonstrating its sensitivity to the training domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 297,
"end": 304,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Out-of-Domain vs. In-Domain",
"sec_num": "4.3"
},
{
"text": "On the contrary, N-gram and T-LSTM are hardly affected. For N-gram, one possible reason is the aforementioned weakness of the N-gram approach: segmentation depends on only the N previous words, which is more stable than the whole sentence and thus eliminates the perturbation that domain variation brings to whole-sentence models. T-LSTM even improves slightly over its in-domain performance. This may be due to a lack of training samples: the 0.19M sentences of IWSLT2014 are insufficient to fit the parameters of T-LSTM, so the model benefits from increasing the corpus size. However, our method needs less training data because our model has been pre-trained. Building on a powerful representation, we need only a small amount of fine-tuning data, which is best drawn from the same domain as the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain vs. In-Domain",
"sec_num": "4.3"
},
{
"text": "Next, we discuss the effect of changing \u03b8. The performance of dynamic-force with varying \u03b8 l is shown in Table 4 . A smaller \u03b8 l brings shorter latency, as well as worse performance. The effect is extremely poor with \u03b8 l = 10. There are two possible reasons: 1) Constraining the sentence length to less than \u03b8 l is too harsh for small \u03b8 l , or 2) The discrepancy between unrestricted training and length-restricted testing causes the poor results. We first focus on the second possible reason. Since the difference between dynamic-base and dynamic-force lies only in prediction, we want to know whether we can achieve better results by controlling the length of training samples. Accordingly, we use only the samples shorter than a fixed value \u03b8 l in the training phase. At inference time, we use both dynamic-force with the same sentence length constraint \u03b8 l and dynamic-base to predict sentence boundaries. As shown in Figure 5 , for each pair of curves with the same \u03b8 l , dynamic-force and dynamic-base exhibit similar performance. This demonstrates that the main reason for the poor performance with small \u03b8 l is not the training-testing discrepancy but the first reason: the force constraint is too harsh.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 914,
"end": 922,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Length of window \u03b8 l",
"sec_num": "4.4"
},
{
"text": "Moreover, it is interesting that the performance of \u03b8 l = 80 is similar to that of \u03b8 l = 40 at the beginning but falls slightly during training. This is probably because the setup with \u03b8 l = 40 filters out some inaccurate cases, as the average number of words per sentence in the IWSLT2014 training set is 20.26.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length of window \u03b8 l",
"sec_num": "4.4"
},
{
"text": "We investigate whether we can achieve better performance with more or fewer future words, experimenting with M from 0 to 5. (Figure 5 : Translation performance on the IWSLT2014 test set. \"\u03b8 l -Force\" denotes setting the sentence length threshold to \u03b8 l in both training sample generation and prediction; \"\u03b8 l -Base\" applies this constraint only in training sample generation.) The results are shown in Table 5 . Reducing M to zero means not referring to any future words in prediction. This degrades performance considerably, proving the effectiveness of adding future words to prediction. Increasing M from 1 to 2 also improves performance in both sentence boundary detection F-score and system BLEU. However, as more future words are added (M = 3 and 4), the improvement becomes less obvious.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 5",
"ref_id": null
},
{
"start": 402,
"end": 409,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Number of Future Words M",
"sec_num": "4.5"
},
{
"text": "Sentence boundary detection has been explored for years, but most of this work focuses on offline punctuation restoration rather than application in simultaneous translation. Existing work can be divided into two classes according to the model input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Some work takes a fixed number of words as input. Focusing on a limited window of the streaming input, these methods predict the probability of placing a boundary at a specific position x t with an N-gram language model (Wang et al., 2016) or a classification model (Yarmohammadi et al., 2013) . The language-model-based method makes its decision based on the N words (x t\u2212N +2 , ..., x t+1 ) and compares their probability with that of (x t\u2212N +2 , ..., x t ,\".\"). The classification model takes features of the N words around x t and classifies them into two classes denoting whether x t is a sentence boundary. The main deficiency of these methods is that dependencies outside the input window are lost, resulting in low accuracy.",
"cite_spans": [
{
"start": 212,
"end": 231,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 258,
"end": 284,
"text": "Yarmohammadi et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based methods",
"sec_num": "5.1"
},
{
"text": "Some other work focuses on restoring punctuation and capitalization using the whole sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Whole sentence-based methods",
"sec_num": "5.2"
},
{
"text": "To improve sentence boundary classification accuracy, some work upgrades the N-gram input to variable-length input using recurrent neural networks (RNNs) (Tilk and Alum\u00e4e, 2015; Salloum et al., 2017) . Other work treats punctuation restoration as a sequence labeling problem and investigates Conditional Random Fields (CRFs) (Lu and Ng, 2010; Wang et al., 2012; Ueffing et al., 2013) . Peitz et al. (2011) and Cho et al. (2012) treat this problem as a machine translation task, training a model to translate non-punctuated transcriptions into punctuated text. However, all these methods use whole-sentence information, which does not fit the simultaneous translation scenario. Moreover, the translation-model-based methods require multiple decoding steps, making them unsuitable for online prediction.",
"cite_spans": [
{
"start": 158,
"end": 181,
"text": "(Tilk and Alum\u00e4e, 2015;",
"ref_id": "BIBREF14"
},
{
"start": 182,
"end": 203,
"text": "Salloum et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 339,
"end": 356,
"text": "(Lu and Ng, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 357,
"end": 375,
"text": "Wang et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 376,
"end": 397,
"text": "Ueffing et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 400,
"end": 419,
"text": "Peitz et al. (2011)",
"ref_id": "BIBREF10"
},
{
"start": 424,
"end": 441,
"text": "Cho et al. (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Whole sentence-based methods",
"sec_num": "5.2"
},
{
"text": "In this paper, we propose an online sentence boundary detection approach. Given streaming word input, our model predicts the boundary probability at multiple positions rather than a single position. By adding this adjacent-position constraint and using dynamic prediction, our method achieves higher accuracy with lower latency. We also incorporate the pre-training technique ERNIE to implement our classification model. The empirical results on IWSLT2014 demonstrate that our approach achieves significant improvements of 0.19 F-score on sentence segmentation and 1.55 BLEU points compared with language-model-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use both terms interchangeably in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "harmonic mean of precision and recall 3 http://www.statmt.org/wmt14/translation-task.html 4 https://wit3.fbk.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Monotonic infinite lookback attention for simultaneous machine translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simulta- neous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Real-time incremental speech-tospeech translation of dialogs",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "Prakash",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Ladan",
"middle": [],
"last": "Kolan",
"suffix": ""
},
{
"first": "Aura",
"middle": [],
"last": "Golipour",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jimenez",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, Vivek Kumar Rangarajan Srid- har, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-to- speech translation of dialogs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Segmentation and punctuation prediction in speech language translation using a monolingual translation system",
"authors": [
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2012,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eunah Cho, Jan Niehues, and Alex Waibel. 2012. Seg- mentation and punctuation prediction in speech lan- guage translation using a monolingual translation system. In International Workshop on Spoken Lan- guage Translation (IWSLT) 2012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Can neural machine translation do simultaneous translation?",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Masha",
"middle": [],
"last": "Esipova",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.02012"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho and Masha Esipova. 2016. Can neu- ral machine translation do simultaneous translation? arXiv preprint arXiv:1606.02012.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of NAACL-HLT 2019.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Simultaneous translation of lectures and speeches",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "F\u00fcgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Muntsin",
"middle": [],
"last": "Kolss",
"suffix": ""
}
],
"year": 2007,
"venue": "Machine translation",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian F\u00fcgen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine translation, 21(4).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to translate in real-time with neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor OK Li. 2017. Learning to translate in real-time with neural machine translation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Better punctuation prediction with dynamic conditional random fields",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu and Hwee Tou Ng. 2010. Better punctuation prediction with dynamic conditional random fields. In Proceedings of the 2010 conference on empirical methods in natural language processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "STACL: simultaneous translation with integrated anticipation and controllable latency",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2019. STACL: simul- taneous translation with integrated anticipation and controllable latency. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Modeling punctuation prediction as machine translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Peitz",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Mauser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2011,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Peitz, Markus Freitag, Arne Mauser, and Her- mann Ney. 2011. Modeling punctuation prediction as machine translation. In International Workshop on Spoken Language Translation (IWSLT) 2011.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep learning for punctuation restoration in medical reports",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Suendermann-Oeft",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wael Salloum, Gregory Finley, Erik Edwards, Mark Miller, and David Suendermann-Oeft. 2017. Deep learning for punctuation restoration in medical re- ports. In BioNLP 2017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Segmentation strategies for streaming speech translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bangalore",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengal- varayan. 2013. Segmentation strategies for stream- ing speech translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Lstm for punctuation restoration in speech transcripts",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth annual conference of the international speech communication association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2015. Lstm for punctu- ation restoration in speech transcripts. In Sixteenth annual conference of the international speech com- munication association.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improved models for automatic punctuation prediction for spoken and written text",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Vozila",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Ueffing, Maximilian Bisani, and Paul Vozila. 2013. Improved models for automatic punctuation prediction for spoken and written text. In Inter- speech.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "31st Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The translation studies reader",
"authors": [
{
"first": "L",
"middle": [],
"last": "Venuti",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Venuti. 2012. The translation studies reader.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An efficient and effective online sentence segmenter for simultaneous interpretation",
"authors": [
{
"first": "Xiaolin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Asian Translation (WAT2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaolin Wang, Andrew Finch, Masao Utiyama, and Ei- ichiro Sumita. 2016. An efficient and effective on- line sentence segmenter for simultaneous interpreta- tion. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic conditional random fields for joint sentence boundary and punctuation prediction",
"authors": [
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sim",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuancong Wang, Hwee Tou Ng, and Khe Chai Sim. 2012. Dynamic conditional random fields for joint sentence boundary and punctuation prediction. In Thirteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Incremental segmentation and decoding strategies for simultaneous translation",
"authors": [
{
"first": "Mahsa",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sankaran",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahsa Yarmohammadi, Vivek Kumar Rangarajan Srid- har, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceed- ings of the Sixth International Joint Conference on Natural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Performance evaluated on IWSLT14 testset for different training sample building strategies.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Reference Eines von zwei Dingen wird passieren.Entweder wird ...",
"content": "<table><tr><td>src</td><td>One of two things is</td><td>going to</td><td colspan=\"2\">happen .</td><td>Either</td><td>it 's</td><td>going</td><td>to</td><td>\u2026</td></tr><tr><td>wait3</td><td colspan=\"3\">Eines von zwei Dingen wird</td><td>passieren.</td><td/><td/><td colspan=\"3\">Entweder wird \u2026</td></tr><tr><td>src without boundary</td><td>One of two things is</td><td>going to</td><td colspan=\"2\">happen either</td><td>it</td><td colspan=\"2\">'s going to</td><td>\u2026</td></tr><tr><td>wait3</td><td colspan=\"3\">Eines von zwei Dingen wird</td><td colspan=\"3\">passieren entweder es ist</td><td>geht</td><td colspan=\"2\">dass \u2026</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Segmentation Performance trained on IWSLT2014. All methods are conducted with the number of future words M = 1.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Segmentation Performance trained on WMT14. All methods are conducted with the number of future words M = 1. N-gram uses grid search to find the best hyperparameters. dyn is short for dynamic, and dynamic-force adopts \u03b8 l = 40.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "Segmentation Performance of dynamic-force trained on IWSLT2014. All methods are conducted with the number of future words M = 1.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"text": "Segmentation Performance of dynamic-force trained on IWSLT2014. All methods are conducted with \u03b8 l = 40.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}