{
"paper_id": "N19-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:01:15.873014Z"
},
"title": "Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"country": "UK"
}
},
"email": "sameer.bansal@inf.ed.ac.uk"
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": "kamperh@sun.ac.za"
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"country": "USA"
}
},
"email": "klivescu@ttic.edu"
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"country": "UK"
}
},
"email": "alopez@inf.ed.ac.uk"
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.",
"pdf_parse": {
"paper_id": "N19-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech-to-text Translation (ST) has many potential applications for low-resource languages: for example in language documentation, where the source language is often unwritten or endangered (Besacier et al., 2006; Martin et al., 2015; Adams et al., 2016a,b; Anastasopoulos and Chiang, 2017) ; or in crisis relief, where emergency workers might need to respond to calls or requests in a foreign language (Munro, 2010) . Traditional ST is a pipeline of automatic speech recognition (ASR) and machine translation (MT), and thus requires transcribed source audio to train ASR and parallel text to train MT. These resources are often unavailable for low-resource languages, but for our potential applications, there may be some source language audio paired with target language text translations. In these scenarios, end-to-end ST is appealing.",
"cite_spans": [
{
"start": 190,
"end": 213,
"text": "(Besacier et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 214,
"end": 234,
"text": "Martin et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 235,
"end": 257,
"text": "Adams et al., 2016a,b;",
"ref_id": null
},
{
"start": 258,
"end": 290,
"text": "Anastasopoulos and Chiang, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 403,
"end": 416,
"text": "(Munro, 2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, Weiss et al. (2017) showed that endto-end ST can be very effective, achieving an impressive BLEU score of 47.3 on Spanish-English ST. But this result required over 150 hours of translated audio for training, still a substantial resource requirement. By comparison, a similar system trained on only 20 hours of data for the same task achieved a BLEU score of 5.3 (Bansal et al., 2018) . Other low-resource systems have similarly low accuracies (Anastasopoulos and Chiang, 2018; B\u00e9rard et al., 2018) .",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Weiss et al. (2017)",
"ref_id": "BIBREF47"
},
{
"start": 372,
"end": 393,
"text": "(Bansal et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 453,
"end": 486,
"text": "(Anastasopoulos and Chiang, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 487,
"end": 507,
"text": "B\u00e9rard et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To improve end-to-end ST in low-resource settings, we can try to leverage other data resources. For example, if we have transcribed audio in the source language, we can use multi-task learning to improve ST (Anastasopoulos and Chiang, 2018; Weiss et al., 2017; B\u00e9rard et al., 2018) . But source language transcriptions are unlikely to be available in our scenarios of interest.",
"cite_spans": [
{
"start": 207,
"end": 240,
"text": "(Anastasopoulos and Chiang, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 241,
"end": 260,
"text": "Weiss et al., 2017;",
"ref_id": "BIBREF47"
},
{
"start": 261,
"end": 281,
"text": "B\u00e9rard et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Could we improve low-resource ST by leveraging data from a high-resource language? For ASR, training a single model on multiple languages can be effective for all of them (Toshniwal et al., 2018b; Deng et al., 2013) . For MT, transfer learning (Thrun, 1995) has been very effective: pretraining a model for a high-resource language pair and transferring its parameters to a low-resource language pair when the target language is shared (Zoph et al., 2016; Johnson et al., 2017) . Inspired by these successes, we show that low-resource ST can leverage transcribed audio in a high-resource target language, or even a different language altogether, simply by pre-training a model for the high-resource ASR task, and then transferring and fine-tuning some or all of the model's parameters for low-resource ST.",
"cite_spans": [
{
"start": 171,
"end": 196,
"text": "(Toshniwal et al., 2018b;",
"ref_id": "BIBREF45"
},
{
"start": 197,
"end": 215,
"text": "Deng et al., 2013)",
"ref_id": null
},
{
"start": 244,
"end": 257,
"text": "(Thrun, 1995)",
"ref_id": "BIBREF42"
},
{
"start": 436,
"end": 455,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF51"
},
{
"start": 456,
"end": 477,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first test our approach using Spanish as the source language and English as the target. After training an ASR system on 300 hours of English, fine-tuning on 20 hours of Spanish-English yields a BLEU score of 20.2, compared to only 10.8 for an ST model without ASR pre-training. Analyzing this result, we discover that the main benefit of pre-training arises from the transfer of the encoder parameters, which model the input acoustic signal. In fact, this effect is so strong that we also obtain improvements by pre-training on a language that differs from both the source and the target: pre-training on French and fine-tuning on Spanish-English. We hypothesize that pre-training the encoder parameters, even on a different language, allows the model to better learn about linguistically meaningful phonetic variation while normalizing over acoustic variability such as speaker and channel differences. We conclude that the acousticphonetic learning problem, rather than translation itself, is one of the main difficulties in low-resource ST. A final set of experiments confirm that ASR pretraining also helps on another language pair where the input is truly low-resource: Mboshi-French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For both ASR and ST, we use an encoder-decoder model with attention adapted from Weiss et al. (2017), B\u00e9rard et al. (2018) and Bansal et al. (2018) , as shown in Figure 1 . We use the same model architecture for all our models, allowing us to conveniently transfer parameters between them. We also constrain the hyper-parameter search to fit a model into a single Titan X GPU, allowing us to maximize available compute resources.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "B\u00e9rard et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 127,
"end": 147,
"text": "Bansal et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "We use a pre-trained English ASR model to initialize training of Spanish-English ST models, and a pre-trained French ASR model to initialize training of Mboshi-French ST models. During ST training, all model parameters are updated. In these configurations, the decoder shares the same vocabulary across the ASR and ST tasks. This is practical for settings where the target text language is highresource with ASR data available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "In settings where both ST languages are lowresource, ASR data may only be available in a third language. To test whether transfer learning will help in this setting, we use a pre-trained French ASR model to train Spanish-English ST models; and English ASR for Mboshi-French models. In these cases, the ST languages are different from the ASR language, so we can only transfer the encoder parameters of the ASR model, since the dimensions of the decoder's output softmax layer are indexed by the vocabulary, which is not shared. 1 Sharing only the speech encoder parameters is much easier, since the speech input can be preprocessed in the same manner for all languages. This form of transfer learning is more flexible, as there are no constraints on the ASR language used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
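The transfer step itself is just selective weight copying: initialize the ST model from the ASR checkpoint, taking the decoder and attention only when the target vocabulary is shared. A minimal sketch, assuming PyTorch-style named state dicts, a saved state_dict checkpoint, and an "encoder." module prefix (all assumptions; the paper's released code is in Chainer):

```python
import torch

def init_from_asr(asr_checkpoint_path, st_model, shared_target_vocab=False):
    """Initialize an ST model from a pre-trained ASR model.

    Encoder parameters are always transferred; decoder and attention
    parameters are transferred only when ASR and ST share the target
    vocabulary (e.g. English ASR -> Spanish-English ST).
    """
    asr_state = torch.load(asr_checkpoint_path, map_location="cpu")
    st_state = st_model.state_dict()
    for name, tensor in asr_state.items():
        is_encoder = name.startswith("encoder.")  # module naming is an assumption
        if (is_encoder or shared_target_vocab) and \
                name in st_state and st_state[name].shape == tensor.shape:
            st_state[name] = tensor
    st_model.load_state_dict(st_state)  # all parameters are fine-tuned afterwards
    return st_model
```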
{
"text": "3 Experimental Setup 3.1 Data sets English ASR. We use the Switchboard Telephone speech corpus (Godfrey and Holliman, 1993) , which consists of around 300 hours of English speech and transcripts, split into 260k utterances. The development set consists of 5 hours that we removed from the training set, split into 4k utterances.",
"cite_spans": [
{
"start": 95,
"end": 123,
"text": "(Godfrey and Holliman, 1993)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "French ASR. We use the French speech corpus from the GlobalPhone collection (Schultz, 2002) , which consists of around 20 hours of high quality read speech and transcripts, split into 9k utterances. The development set consists of 2 hours, split into 800 utterances.",
"cite_spans": [
{
"start": 76,
"end": 91,
"text": "(Schultz, 2002)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Spanish-English ST. We use the Fisher Spanish speech corpus (Graff et al., 2010) , which consists of 160 hours of telephone speech in a variety of Spanish dialects, split into 140K utterances. To simulate low-resource conditions, we construct smaller train-ing corpora consisting of 50, 20, 10, 5, or 2.5 hours of data, selected at random from the full training data. The development and test sets each consist of around 4.5 hours of speech, split into 4K utterances. We do not use the corresponding Spanish transcripts; our target text consists of English translations that were collected through crowdsourcing (Post et al., 2013 (Post et al., , 2014 .",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "(Graff et al., 2010)",
"ref_id": null
},
{
"start": 612,
"end": 630,
"text": "(Post et al., 2013",
"ref_id": "BIBREF33"
},
{
"start": 631,
"end": 651,
"text": "(Post et al., , 2014",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Mboshi-French ST. Mboshi is a Bantu language spoken in the Republic of Congo, with around 160,000 speakers. 2 We use the Mboshi-French parallel corpus (Godard et al., 2018) , which consists of around 4 hours of Mboshi speech, split into a training set of 5K utterances and a development set of 500 utterances. Since this corpus does not include a designated test set, we randomly sampled and removed 200 utterances from training to use as a development set, and use the designated development data as a test set.",
"cite_spans": [
{
"start": 108,
"end": 109,
"text": "2",
"ref_id": null
},
{
"start": 151,
"end": 172,
"text": "(Godard et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Speech. We convert raw speech input to 13dimensional MFCCs using Kaldi (Povey et al., 2011) . 3 We also perform speaker-level mean and variance normalization.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF35"
},
{
"start": 94,
"end": 95,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
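The speaker-level normalization step is standard cepstral mean and variance normalization (Kaldi provides tools for this); a NumPy sketch with hypothetical data structures:

```python
import numpy as np

def speaker_cmvn(utterances, speaker_of):
    """utterances: dict utt_id -> (num_frames, 13) MFCC array;
    speaker_of: dict utt_id -> speaker id."""
    # Pool all frames of each speaker and compute mean/std per coefficient.
    pooled = {}
    for utt_id, feats in utterances.items():
        pooled.setdefault(speaker_of[utt_id], []).append(feats)
    stats = {}
    for spk, feat_list in pooled.items():
        frames = np.concatenate(feat_list, axis=0)
        stats[spk] = (frames.mean(axis=0), frames.std(axis=0) + 1e-8)
    # Normalize each utterance with its speaker's statistics.
    return {utt_id: (feats - stats[speaker_of[utt_id]][0]) / stats[speaker_of[utt_id]][1]
            for utt_id, feats in utterances.items()}
```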
{
"text": "Text. The target text of the Spanish-English data set contains 1.5M word tokens and 17K word types. If we model text as sequences of words, our model cannot produce any of the unseen word types in the test data and is penalized for this, but it can be trained very quickly (Bansal et al., 2018) . If we instead model text as sequences of characters as done by Weiss et al. 2017, we would have 7M tokens and 100 types, resulting in a model that is open-vocabulary, but very slow to train (Bansal et al., 2018) . As an effective middle ground, we use byte pair encoding (BPE; Sennrich et al., 2016) to segment each word into subwords, each of which is a character or a high-frequency sequence of characters-we use 1000 of these high-frequency sequences. Since the set of subwords includes the full set of characters, the model is still open vocabulary; but it results in a text with only 1.9M tokens and just over 1K types, which can be trained almost as fast as the word-level model.",
"cite_spans": [
{
"start": 273,
"end": 294,
"text": "(Bansal et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 487,
"end": 508,
"text": "(Bansal et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 574,
"end": 596,
"text": "Sennrich et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
{
"text": "The vocabulary for BPE depends on the fre-quency of character sequences, so it must be computed with respect to a specific corpus. For English, we use the full 160-hour Spanish-English ST target training text. For French, we use the Mboshi-French ST target training text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
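In practice one would learn these merges with the subword-nmt tool of Sennrich et al. (2016); the following toy sketch shows the core merge-learning loop, with the merge count playing the role of the 1000 high-frequency sequences mentioned above:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges=1000):
    """Toy byte pair encoding: repeatedly merge the most frequent adjacent
    symbol pair. word_freqs maps a word to its corpus frequency."""
    vocab = {tuple(word) + ("</w>",): n for word, n in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, n in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += n
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for symbols, n in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = n
        vocab = merged
    return merges
```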
{
"text": "3.3 Model architecture for ASR and ST Speech encoder. As shown schematically in Figure 1, MFCC feature vectors, extracted using a window size of 25 ms and a step size of 10ms, are fed into a stack of two CNN layers, with 128 and 512 filters with a filter width of 9 frames each. In each CNN layer we stride with a factor of 2 along time, apply a ReLU activation (Nair and Hinton, 2010) , and apply batch normalization (Ioffe and Szegedy, 2015) . The output of the CNN layers is fed into a three-layer bi-directional long short term memory network (LSTM; Hochreiter and Schmidhuber, 1997) ; each hidden layer has 512 dimensions.",
"cite_spans": [
{
"start": 362,
"end": 385,
"text": "(Nair and Hinton, 2010)",
"ref_id": "BIBREF31"
},
{
"start": 418,
"end": 443,
"text": "(Ioffe and Szegedy, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 554,
"end": 587,
"text": "Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 80,
"end": 86,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
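A sketch of this encoder, re-expressed in PyTorch for illustration (the paper's implementation is in Chainer; the use of 1-D convolutions over time and a per-direction LSTM size of 512 are assumptions):

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    def __init__(self, n_mfcc=13):
        super().__init__()
        # Two CNN layers over time: 128 then 512 filters, width 9 frames,
        # stride 2 along time, ReLU then batch normalization.
        self.conv1 = nn.Conv1d(n_mfcc, 128, kernel_size=9, stride=2, padding=4)
        self.bn1 = nn.BatchNorm1d(128)
        self.conv2 = nn.Conv1d(128, 512, kernel_size=9, stride=2, padding=4)
        self.bn2 = nn.BatchNorm1d(512)
        # Three-layer bidirectional LSTM; 512 hidden units per direction
        # (the per-direction size is an assumption).
        self.lstm = nn.LSTM(512, 512, num_layers=3,
                            bidirectional=True, batch_first=True)

    def forward(self, mfcc):                      # mfcc: (batch, frames, 13)
        x = mfcc.transpose(1, 2)                  # -> (batch, 13, frames)
        x = self.bn1(torch.relu(self.conv1(x)))
        x = self.bn2(torch.relu(self.conv2(x)))   # time downsampled by 4
        x = x.transpose(1, 2)                     # -> (batch, frames/4, 512)
        states, _ = self.lstm(x)
        return states                             # attended to by the decoder
```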
{
"text": "Text decoder. At each time step, the decoder chooses the most probable token from the output of a softmax layer produced by a fully-connected layer, which in turn receives the current state of a recurrent layer computed from previous time steps and an attention vector computed over the input. Attention is computed using the global attentional model with general score function and inputfeeding, as described in Luong et al. (2015) . The predicted token is then fed into a 128-dimensional embedding layer followed by a three-layer LSTM to update the recurrent state; each hidden state has 256 dimensions. While training, we use the predicted token 20% of the time as input to the next decoder step and the training token for the remaining 80% of the time (Williams and Zipser, 1989) . At test time we use beam decoding with a beam size of 5 and length normalization (Wu et al., 2016) with a weight of 0.6.",
"cite_spans": [
{
"start": 413,
"end": 432,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF28"
},
{
"start": 756,
"end": 783,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF48"
},
{
"start": 867,
"end": 884,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
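Two decoding details above are compact enough to write out: the Wu et al. (2016) length penalty with weight 0.6, and the 80/20 mixing of gold and predicted tokens during training. A sketch:

```python
import math
import random

def length_normalized_score(sum_log_prob, length, alpha=0.6):
    # Length penalty of Wu et al. (2016): lp = ((5 + |Y|) / 6) ** alpha;
    # hypotheses are ranked by their summed log-probability divided by lp.
    return sum_log_prob / (((5.0 + length) / 6.0) ** alpha)

def next_decoder_input(predicted_token, gold_token, p_predicted=0.2):
    # During training, feed the model's own prediction 20% of the time
    # and the gold training token the remaining 80% of the time.
    return predicted_token if random.random() < p_predicted else gold_token
```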
{
"text": "Training and implementation. Parameters for the CNN and RNN layers are initialized using the scheme from (He et al., 2015) . For the embedding and fully-connected layers, we use Chainer's (Tokui et al., 2015) default initialition. We regularize using dropout (Srivastava et al., 2014) , with a ratio of 0.3 over the embedding and LSTM layers (Gal, 2016) , and a weight decay rate of 0.0001. The parameters are optimized using Adam (Kingma and Ba, 2015) , with a starting alpha of 0.001.",
"cite_spans": [
{
"start": 105,
"end": 122,
"text": "(He et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 188,
"end": 208,
"text": "(Tokui et al., 2015)",
"ref_id": "BIBREF43"
},
{
"start": 259,
"end": 284,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF40"
},
{
"start": 342,
"end": 353,
"text": "(Gal, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 431,
"end": 452,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
{
"text": "Following some preliminary experimentation on our development set, we add Gaussian noise with standard deviation of 0.25 to the MFCC features during training, and drop frames with a probability of 0.10. After 20 epochs, we corrupt the true decoder labels by sampling a random output label with a probability of 0.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
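A NumPy sketch of these corruption steps under the stated settings (Gaussian noise with standard deviation 0.25, frame dropping with probability 0.10, and label resampling with probability 0.3 after epoch 20):

```python
import numpy as np

def corrupt_features(mfcc, noise_std=0.25, drop_prob=0.10, rng=np.random):
    """Add Gaussian noise to MFCC frames and randomly drop frames."""
    noisy = mfcc + rng.normal(0.0, noise_std, size=mfcc.shape)
    keep = rng.random_sample(len(noisy)) >= drop_prob
    return noisy[keep]

def corrupt_labels(tokens, vocab_size, epoch, prob=0.3, rng=np.random):
    """After epoch 20, replace each gold label with a random output
    label with probability 0.3."""
    tokens = np.asarray(tokens)
    if epoch <= 20:
        return tokens
    mask = rng.random_sample(len(tokens)) < prob
    random_ids = rng.randint(0, vocab_size, size=len(tokens))
    return np.where(mask, random_ids, tokens)
```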
{
"text": "Our code is implemented in Chainer (Tokui et al., 2015) and is freely available. 4",
"cite_spans": [
{
"start": 35,
"end": 55,
"text": "(Tokui et al., 2015)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
{
"text": "Metrics. We report BLEU (Papineni et al., 2002) for all our models. 5 In low-resource settings, BLEU scores tend to be low, difficult to interpret, and poorly correlated with model performance. This is because BLEU requires exact four-gram matches only, but low four-gram accuracy may obscure a high unigram accuracy and inexact translations that partially capture the semantics of an utterance, and these can still be very useful in situations like language documentation and crisis response. Therefore, we also report word-level unigram precision and recall, taking into account stem, synonym, and paraphrase matches. To compute these scores, we use METEOR (Lavie and Agarwal, 2007) with default settings for English and French. 6 For example, METEOR assigns \"eat\" a recall of 1 against reference \"eat\" and a recall of 0.8 against reference \"feed\", which it considers a synonym match.",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF32"
},
{
"start": 659,
"end": 684,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "Naive baselines. We also include evaluation scores for a naive baseline model that predicts the K most frequent words of the training set as a bag of words for each test utterance. We set K to be the value at which precision/recall are most similar, which is always between 5 and 20 words. This provides an empirical lower bound on precision and recall, since we would expect any usable model to outperform a system that does not even depend on the input utterance. We do not compute BLEU for these baselines, since they do not predict sequences, only bags of words. ment data in Table 1 . 7 We denote each ASR model by L-Nh, where L is a language code and N is the size of the training set in hours. For example, en-300h denotes an English ASR model trained on 300 hours of data.",
"cite_spans": [
{
"start": 590,
"end": 591,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 580,
"end": 587,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
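A sketch of this naive baseline, using exact word matches (the paper's METEOR-based scoring additionally credits stem, synonym, and paraphrase matches); K is swept over 5-20 for the point where precision and recall are most similar:

```python
from collections import Counter

def naive_baseline_pr(train_refs, test_refs, k):
    """Predict the k most frequent training words as a bag of words for
    every test utterance; return micro-averaged unigram precision/recall."""
    counts = Counter(w for ref in train_refs for w in ref.split())
    predicted = {w for w, _ in counts.most_common(k)}
    tp = fp = fn = 0
    for ref in test_refs:
        ref_words = set(ref.split())
        tp += len(predicted & ref_words)
        fp += len(predicted - ref_words)
        fn += len(ref_words - predicted)
    return tp / (tp + fp), tp / (tp + fn)

def pick_k(train_refs, dev_refs, k_range=range(5, 21)):
    """Choose the K whose precision and recall are closest to each other."""
    best_k, best_gap = None, float("inf")
    for k in k_range:
        p, r = naive_baseline_pr(train_refs, dev_refs, k)
        if abs(p - r) < best_gap:
            best_k, best_gap = k, abs(p - r)
    return best_k
```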
{
"text": "Training ASR models for state-of-the-art performance requires substantial hyper-parameter tuning and long training times. Since our goal is simply to see whether pre-training is useful, we stopped pretraining our models after around 30 epochs (3 days) to focus on transfer experiments. As a consequence, our ASR results are far from state-of-the-art: current end-to-end Kaldi systems obtain 16% WER on Switchboard train-dev, and 22.7% WER on the French Globalphone dev set. 8 We believe that better ASR pre-training may produce better ST results, but we leave this for future work.",
"cite_spans": [
{
"start": 474,
"end": 475,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.4"
},
{
"text": "In the following, we denote an ST model by S-T-Nh, where S and T are source and target language codes, and N is the size of the training set in hours. For example, sp-en-20h denotes a Spanish-English ST model trained using 20 hours of data. We use the code mb for Mboshi and fr for French. Figure 2 shows the BLEU and unigram precision/recall scores on the development set for baseline Spanish-English ST models and those trained after initializing with the en-300h model. Corresponding results on the test set (Table 2) reveal very similar patterns. The remainder of our analysis is confined to the development set. The naive baseline, which predicts the 15 most frequent English words in the training set, achieves a precision/recall of around 20%, setting a performance lower bound.",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Spanish-English ST",
"sec_num": "5"
},
{
"text": "Low-resource: 20-50 hours of ST training data. Our baseline ST models substantially improve over previous results (Bansal et al., 2018) using the same train/test splits, primarily due to better regularization and modeling of subwords rather than words. Yet transfer learning still substantially improves over these strong baselines. For sp-en-20h, transfer learning improves dev set BLEU from 10.8 to 19.9, precision from 41% to 51%, and recall from 38% to 49%. For sp-en-50h, transfer learning improves BLEU from 23.3 to 27.8, precision from 54% to 58%, and recall from 51% to 56%.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Bansal et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using English ASR to improve ST",
"sec_num": "5.1"
},
{
"text": "Very low-resource: 10 hours or less of ST training data. Figure 2 shows that without transfer learning, ST models trained on less than 10 hours of data struggle to learn, with precision/recall scores close to or below that of the naive baseline. But with transfer learning, we see gains in precision and recall of between 10 and 20 points.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 65,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Using English ASR to improve ST",
"sec_num": "5.1"
},
{
"text": "We also see that with transfer learning, a model trained on only 5 hours of ST data achieves a BLEU of 9.1, nearly as good as the 10.8 of a model trained on 20 hours of ST data without transfer learning. In other words, fine-tuning an English ASR modelwhich is relatively easy to obtain-produces similar results to training an ST model on four times as N = 0 2.5 5 10 20 50 base 0 2.1 1.8 2.1 10.8 22.7 +asr 0.5 5.7 9.1 14.5 20.2 28.2 much data, which may be difficult to obtain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using English ASR to improve ST",
"sec_num": "5.1"
},
{
"text": "We even find that in the very low-resource setting of just 2.5 hours of ST data, with transfer learning the model achieves a precision/recall of around 30% and improves by more than 10 points over the naive baseline. In very low-resource scenarios with time constraints-such as in disaster relief-it is possible that even this level of performance may be useful, since it can be used to spot keywords in speech and can be trained in just three hours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using English ASR to improve ST",
"sec_num": "5.1"
},
{
"text": "Sample translations. Table 3 shows example translations for models sp-en-20h and sp-en-50h with and without transfer learning using en-300h. Figure 3 shows the attention weights for the last sample utterance in Table 3 . For this utterance, the Spanish and English text have a different word order: mucho tiempo occurs in the middle of the speech utterance, and its translation, long time, is at the end of the English reference. Similarly, vive aqu\u00ed occurs at the end of the speech utterance, while the translation, living here, is in the middle of the English reference. The baseline sp-en-50h model translates the words correctly but doesn't get the English word order right. With transfer learning, the model produces a shorter but still accurate translation in the correct word order.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 141,
"end": 149,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 211,
"end": 218,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Using English ASR to improve ST",
"sec_num": "5.1"
},
{
"text": "To understand the source of these improvements, we carried out a set of ablation experiments. For most of these experiments, we focus on Spanish-English ST with 20 hours of training data, with and without transfer learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Transfer learning with selected parameters. In our first set of experiments, we transferred all parameters of the en-300h model, including the speech encoder CNN and LSTM; the text decoder embedding, LSTM and output layer parameters; and attention parameters. To see which set of parameters has the most impact, we train the sp-en-20h model by transferring only selected parameters from en-300h, and randomly initializing the rest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "The results (Figure 4) show that transferring all Figure 4 : Fisher development set training curves (reported using BLEU) for sp-en-20h using selected parameters from en-300h: none (base); encoder CNN only (+asr:cnn); encoder CNN and LSTM only (+asr:enc); decoder only (+asr:dec); and all: encoder, attention, and decoder (+asr:all). These scores do not use beam search and are therefore lower than the best scores reported in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 22,
"text": "(Figure 4)",
"ref_id": null
},
{
"start": 50,
"end": 58,
"text": "Figure 4",
"ref_id": null
},
{
"start": 427,
"end": 435,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "parameters is most effective, and that the speech encoder parameters account for most of the gains. We hypothesize that the encoder learns transferable low-level acoustic features that normalize across variability like speaker and channel differences to better capture meaningful phonetic differences, and that much of this learning is language-independent. This hypothesis is supported by other work showing the benefits of cross-lingual and multilingual training for speech technology in low-resource target languages (Carlin et al., 2011; Jansen et al., 2010; Deng et al., 2013; Vu et al., 2012; Thomas et al., 2012; Cui et al., 2015; Alum\u00e4e et al., 2016; Renshaw et al., 2015; Hermann and Goldwater, 2018) .",
"cite_spans": [
{
"start": 520,
"end": 541,
"text": "(Carlin et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 542,
"end": 562,
"text": "Jansen et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 563,
"end": 581,
"text": "Deng et al., 2013;",
"ref_id": null
},
{
"start": 582,
"end": 598,
"text": "Vu et al., 2012;",
"ref_id": "BIBREF46"
},
{
"start": 599,
"end": 619,
"text": "Thomas et al., 2012;",
"ref_id": "BIBREF41"
},
{
"start": 620,
"end": 637,
"text": "Cui et al., 2015;",
"ref_id": null
},
{
"start": 638,
"end": 658,
"text": "Alum\u00e4e et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 659,
"end": 680,
"text": "Renshaw et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 681,
"end": 709,
"text": "Hermann and Goldwater, 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "By contrast, transferring only decoder parameters does not improve accuracy. Since decoder parameters help when used in tandem with encoder parameters, we suspect that the dependency in parameter training order might explain this: the transferred decoder parameters have been trained to expect particular input representations from the encoder, so transferring only the decoder parameters without the encoder might not be useful. Figure 4 also suggests that models make strong gains early on in the training when using transfer learning. The sp-en-20h model initialized with all model parameters (+asr:all) from en-300h reaches a higher BLEU score after just 5 epochs (2 hours) of training than the model without transfer learning trained for 60 epochs/20 hours. This again can be useful in disaster-recovery scenarios, where the 0h 100h 300h # English ASR hours data used 0 3 6 9 12 15 18 21 24 27 30 BLEU sp -e n-20 h sp-en-50h Figure 5 : Spanish-to-English BLEU scores on Fisher dev set, with 0h (no transfer learning), 100h and 300h of English ASR data used.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 438,
"text": "Figure 4",
"ref_id": null
},
{
"start": 944,
"end": 952,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "time to deploy a working system must be minimized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Amount of ASR data required. Figure 5 shows the impact of increasing the amount of English ASR data used on Spanish-English ST performance for two models: sp-en-20h and sp-en-50h.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "For sp-en-20h, we see that using en-100h improves performance by almost 6 BLEU points. By using more English ASR training data (en-300h) model, the BLEU score increases by almost 9 points. However, for sp-en-50h, we only see improvements when using en-300h. This implies that transfer learning is most useful when only a few tens of hours of training data are available for ST. As the amount of ST training data increases, the benefits of transfer learning tail off, although it's possible that using even more monolingual data, or improving the training at the ASR step, could extend the benefits to larger ST data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Impact of code-switching. We also tried using the en-300h ASR model without any fine-tuning to translate Spanish audio to English text. This model achieved a BLEU score of 1.1, with a precision of 15 and recall of 21. The non-zero BLEU score indicates that the model is matching some 4-grams in the reference. This seems to be due to code-switching in the Fisher-Spanish speech data set. Looking at the dev set utterances, we find several examples where the Spanish transcriptions match the English translations, indicating that the speaker switched into English. For example, there is an utterance whose Spanish transcription and English translation are both \"right yeah\", and this English expression is indeed present in the source audio. The English ASR model correctly translates this utterance, which is unsurprising since the phrase \"right yeah\" occurs nearly 500 times in Switchboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
{
"text": "Overall, we find that in nearly 500 of the 4,000 development set utterances (14%), the Spanish transcription and English translations share more than half of their tokens, indicating likely codeswitching. This suggests that transfer learning from English ASR models might help more than from other languages. To isolate this effect from transfer learning of language-independent speech features, we carried out a further experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.2"
},
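The code-switching check described above is a simple token-overlap heuristic; a sketch (the tokenization and the choice of denominator are assumptions):

```python
def likely_code_switched(transcript, translation, threshold=0.5):
    """Flag utterance pairs whose Spanish transcript and English
    translation share more than half of their tokens."""
    src = set(transcript.lower().split())
    tgt = set(translation.lower().split())
    return bool(src) and len(src & tgt) / len(src) > threshold

# Fraction of likely code-switched utterances in a corpus of (src, tgt) pairs:
# sum(likely_code_switched(s, t) for s, t in pairs) / len(pairs)
```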
{
"text": "Spanish-English ST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "In this experiment, we pre-train using French ASR data for a Spanish-English translation task. Here, we can only transfer the speech encoder parameters, and there should be little if any benefit due to codeswitching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "Because our French data set (20 hours) is much smaller than our English one (300 hours), for a fair comparison we used a 20 hour subset of the English data for pre-training in this experiment. For both the English and French models, we transferred only the encoder parameters. Table 4 shows that both the English and French 20-hour pre-trained models improve performance on Spanish-English ST. The English model works slightly better, as would be predicted given our discussion of code-switching, but the French model is also useful, improving BLEU from 10.8 to 12.5. This result strengthens the claim that ASR pretraining on a completely distinct third language can help low-resource ST. Presumably benefits would be much greater if we used a larger ASR data set, as we did with English above.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "In this experiment, the French pre-trained model used a French BPE output vocabulary, distinct from the English BPE vocabulary used in the ST system. In the future it would be interesting to try combining the French and English text to create a combined output vocabulary, which would allow transferring both the encoder and decoder parameters, and may be useful for translating names or cognates. More generally, it would also be possible to pre-train on multiple languages simultaneously using a shared BPE vocabulary. There is evidence that speech features trained on multiple languages transfer better than those trained on the same amount of data from a single language (Hermann and Goldwater, 2018), so multilingual pretraining for ST could improve results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "baseline +fr-20h +en-20h sp-en-20h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "10.8 12.5 13.2 5.9 23.6 20.9 en-300h 5.3 23.5 22.6 en + fr 7.1 26.7 23.1 Table 5 : Mboshi-to-French translation scores, with and without ASR pre-training. Pr. is the precision, and Rec. the recall score. fr-top-8w and fr-top-10w are naive baselines that, respectively, predict the 8 or 10 most frequent training words. For en + fr, we use encoder parameters from en-300h and attention+decoder parameters from fr-20h",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "6 Mboshi-French ST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "Our final set of experiments test our transfer method on ST for the low-resource language Mboshi, where we have only 4 hours of ST training data: Mboshi speech input paired with French text output. Table 5 shows the ST model scores for Mboshi-French with and without using transfer learning. The first two rows fr-top-8w, fr-top-10w, show precision and recall scores for the naive baselines where we predict the top 8 or 10 most frequent French words in the Mboshi-French training set. These show that a precision/recall in the low 20s is easy to achieve, although with no n-gram matches (0 BLEU). The pre-trained ASR models by themselves (next two lines) are much worse.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "The baseline model trained only on ST data actually has lower precision/recall than the naive baseline, although its non-zero BLEU score indicates that it is able to correctly predict some n-grams. We see comparable precision/recall to the naive baseline with improvements in BLEU by transferring either French ASR parameters (both encoder and decoder, fr-20h) or English ASR parameters (encoder only, en-300h).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "Finally, to achieve the benefits of both the larger training set size for the encoder and the matching language of the decoder, we tried transferring the encoding parameters from the en-300h model and the decoding parameters from the fr-20h model. This configuration (en+fr) gives us the best evaluation scores on all metrics, and highlights the flexibility of our framework. Nevertheless, the 4-hour scenario is clearly a very challenging one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using French ASR to improve",
"sec_num": "5.3"
},
{
"text": "This paper introduced the idea of pre-training an end-to-end speech translation system involving a low-resource language using ASR training data from a higher-resource language. We showed that large gains are possible: for example, we achieved an improvement of 9 BLEU points for a Spanish-English ST model with 20 hours of parallel data and 300 hours of English ASR data. Moreover, the pre-trained model trains faster than the baseline, achieving higher BLEU in only a couple of hours, while the baseline trains for more than a day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We also showed that these methods can be used effectively on a real low-resource language, Mboshi, with only 4 hours of parallel data. The very small size of the data set makes the task challenging, but by combining parameters from an English encoder and French decoder, we outperformed baseline models to obtain a BLEU score of 7.1 and precision/recall of about 25%. We believe ours is the first paper to report word-level BLEU scores on this data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our analysis indicates that, other things being equal, transferring both encoder and decoder parameters works better than just transferring one or the other. However, transferring the encoder parameters is where most of the benefit comes from. Pre-training using a large ASR corpus from a mismatched language will therefore probably work better than using a smaller ASR corpus that matches the output language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our analysis suggests several avenues for further exploration. On the speech side, it might be even more effective to use multilingual training; or to replace the MFCC input features with pre-trained multilingual features, or features that are targeted to low-resource multispeaker settings (Kamper et al., , 2017 Thomas et al., 2012; Cui et al., 2015; Renshaw et al., 2015) . On the language modeling side, simply transferring decoder parameters from an ASR model did not work; it might work better to use pre-trained decoder parameters from a language model, as proposed by Ramachandran et al. (2017) , or shallow fusion (G\u00fcl\u00e7ehre et al., 2015; Toshniwal et al., 2018a) , which interpolates a pre-trained language model during beam search. In these methods, the decoder parameters are independent, and can therefore be used on their own. We plan to explore these strategies in future work.",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "(Kamper et al., , 2017",
"ref_id": "BIBREF24"
},
{
"start": 314,
"end": 334,
"text": "Thomas et al., 2012;",
"ref_id": "BIBREF41"
},
{
"start": 335,
"end": 352,
"text": "Cui et al., 2015;",
"ref_id": null
},
{
"start": 353,
"end": 374,
"text": "Renshaw et al., 2015)",
"ref_id": "BIBREF37"
},
{
"start": 576,
"end": 602,
"text": "Ramachandran et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 623,
"end": 646,
"text": "(G\u00fcl\u00e7ehre et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 647,
"end": 671,
"text": "Toshniwal et al., 2018a)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
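Shallow fusion itself is a one-line change to beam search scoring: interpolate an external language model's next-token log-probabilities with the translation model's. A sketch (the weight of 0.3 is illustrative, not taken from the cited papers):

```python
import torch

def fused_step_scores(st_log_probs, lm_log_probs, lm_weight=0.3):
    """st_log_probs, lm_log_probs: (beam, vocab) next-token log-probabilities
    from the ST decoder and an external language model over the same
    vocabulary. The fused score is used to rank beam candidates."""
    return st_log_probs + lm_weight * lm_log_probs
```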
{
"text": "Using a shared vocabulary of characters or subwords is an interesting direction for future work, but not explored here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ethnologue.com/language/mdw 3 In preliminary experiments, we did not find much difference between between MFCCs and more raw spectral representations like Mel filterbank features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ASR resultsUsing the experimental setup of Section 3, we pretrained ASR models in English and French, and report their word error rates (WER) on develop-4 github.com/0xSameer/ast 5 We compute BLEU with multi-bleu.pl from the Moses toolkit(Koehn et al., 2007).6 cs.cmu.edu/\u02dcalavie/METEOR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We computed WER with the NIST sclite script.8 These WER results taken from respective Kaldi recipes on GitHub, and may not represent the very best results on these data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable feedback. This work was supported in part by a James S McDonnell Foundation Scholar Award, a Google faculty research award, and NSF grant 1816627. We thank Ida Szubert and Clara Vania for helpful comments on previous drafts of this paper and Antonios Anastasopoulos for tips on experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning a translation model from word lattices",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Adams, Graham Neubig, Trevor Cohn, and Steven Bird. 2016a. Learning a translation model from word lattices. In Proc. Interspeech.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Quoc Truong Do, and Satoshi Nakamura. 2016b. Learning a lexicon and translation model from phoneme lattices",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": null,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Adams, Graham Neubig, Trevor Cohn, Steven Bird, Quoc Truong Do, and Satoshi Nakamura. 2016b. Learning a lexicon and translation model from phoneme lattices. In Proc. EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved multilingual training of stacked neural network acoustic models for low resource languages",
"authors": [
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
},
{
"first": "Stavros",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanel Alum\u00e4e, Stavros Tsakalidis, and Richard M Schwartz. 2016. Improved multilingual training of stacked neural network acoustic models for low re- source languages. In Proc. Interspeech.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A case study on using speech-to-translation alignments for language documentation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2017. A case study on using speech-to-translation alignments for language documentation. In Proc. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tied multitask learning for neural speech translation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. NAACL HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proc. NAACL HLT.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lowresource speech-to-text translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Low- resource speech-to-text translation. In Proc. Inter- speech.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "End-to-end automatic speech translation of audiobooks",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Laurent Besacier, Ali Can Ko- cabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In Proc. ICASSP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards speech translation of non written languages",
"authors": [
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurent Besacier, Bowen Zhou, and Yuqing Gao. 2006. Towards speech translation of non written languages. In Proc. SLT.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rapid evaluation of speech representations for spoken term discovery",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Carlin",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. 2011. Rapid evaluation of speech representations for spoken term discovery. In Proc. Interspeech.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multilingual representations for low resource speech recognition and keyword search",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Picheny",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Picheny, et al. 2015. Multilingual repre- sentations for low resource speech recognition and keyword search. In Proc. ASRU.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Yifan Gong, and Alex Acero. 2013. Recent advances in deep learning for speech research at Microsoft",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jinyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jui-Ting",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Seltzer",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": null,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Deng, Jinyu Li, Jui-Ting Huang, Kaisheng Yao, Dong Yu, Frank Seide, Mike Seltzer, Geoff Zweig, Xiaodong He, Jason Williams, Yifan Gong, and Alex Acero. 2013. Recent advances in deep learning for speech research at Microsoft. In Proc. ICASSP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Proc. NIPS.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A very low resource language speech corpus for computational language documentation experiments",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Godard",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Adda",
"suffix": ""
},
{
"first": "Martine",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Benjumea",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Jamison",
"middle": [],
"last": "Cooper-Leavitt",
"suffix": ""
},
{
"first": "Guy-No\u00ebl",
"middle": [],
"last": "Kouarata",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Lamel",
"suffix": ""
},
{
"first": "H\u00e9l\u00e8ne",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Rialland",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Godard, Gilles Adda, Martine Adda-Decker, Juan Benjumea, Laurent Besacier, Jamison Cooper- Leavitt, Guy-No\u00ebl Kouarata, Lori Lamel, H\u00e9l\u00e8ne Maynard, Markus M\u00fcller, Annie Rialland, Sebastian St\u00fcker, Fran\u00e7ois Yvon, and Marcely Zanon Boito. 2018. A very low resource language speech cor- pus for computational language documentation ex- periments. In Proc. LREC.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Switchboard-1 Release 2 (LDC97S62)",
"authors": [
{
"first": "John",
"middle": [],
"last": "Godfrey",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Holliman",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Godfrey and Edward Holliman. 1993. Switchboard-1 Release 2 (LDC97S62). https: //catalog.ldc.upenn.edu/ldc97s62.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On using monolingual corpora in neural machine translation",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Lo\u0131c",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Huei-Chi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caglar G\u00fcl\u00e7ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo\u0131c Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. CoRR, abs/1503.03535.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpass- ing human-level performance on ImageNet classifi- cation. In Proc. ICCV.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual bottleneck features for subword modeling in zero-resource languages",
"authors": [
{
"first": "Enno",
"middle": [],
"last": "Hermann",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enno Hermann and Sharon Goldwater. 2018. Multi- lingual bottleneck features for subword modeling in zero-resource languages. In Proc. Interspeech.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Towards spoken term discovery at scale with zero resources",
"authors": [
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aren Jansen, Kenneth Church, and Hynek Hermansky. 2010. Towards spoken term discovery at scale with zero resources. In Proc. Interspeech.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Trans. ACL",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Gre- gory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine transla- tion system: Enabling zero-shot translation. Trans. ACL, 5:339-351.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unsupervised neural network based feature extraction using weak top-down constraints",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Micha Elsner, Aren Jansen, and Sharon Goldwater. 2015. Unsupervised neural net- work based feature extraction using weak top-down constraints. In Proc. ICASSP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A segmental framework for fullyunsupervised large-vocabulary speech recognition",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2017,
"venue": "Comput. Speech Lang",
"volume": "46",
"issue": "",
"pages": "154--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Kamper, Aren Jansen, and Sharon Gold- water. 2017. A segmental framework for fully- unsupervised large-vocabulary speech recognition. Comput. Speech Lang., 46:154-174.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high lev- els of correlation with human judgments. In Proc. WMT.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Proc. EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Utterance classification in speech-to-speech translation for zero-resource languages in the hospital administration domain",
"authors": [
{
"first": "Lara",
"middle": [
"J"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Wilkinson",
"suffix": ""
},
{
"first": "Sai",
"middle": [
"Sumanth"
],
"last": "Miryala",
"suffix": ""
},
{
"first": "Vivian",
"middle": [],
"last": "Robison",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara J Martin, Andrew Wilkinson, Sai Sumanth Miryala, Vivian Robison, and Alan W Black. 2015. Utterance classification in speech-to-speech transla- tion for zero-resource languages in the hospital ad- ministration domain. In Proc. ASRU.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Crowdsourced translation for emergency response in Haiti: The global collaboration of local knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Munro",
"suffix": ""
}
],
"year": 2010,
"venue": "AMTA Workshop Collaborative Crowdsourcing Transl",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Munro. 2010. Crowdsourced translation for emergency response in Haiti: The global collabora- tion of local knowledge. In AMTA Workshop Collab- orative Crowdsourcing Transl.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Rectified linear units improve restricted Boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proc. ICML.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proc. ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khu- danpur. 2013. Improved speech-to-text transla- tion with the Fisher and Callhome Spanish-English speech translation corpus. In Proc. IWSLT.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Kaldi Speech Recognition Toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "Silovsky, Georg Stemmer, and Karel Vesely",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi Speech Recognition Toolkit. In Proc. ASRU.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Unsupervised pretraining for sequence to sequence learning",
"authors": [
{
"first": "Prajit",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prajit Ramachandran, Peter J Liu, and Quoc V Le. 2017. Unsupervised pretraining for sequence to se- quence learning. In Proc. EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A comparison of neural network methods for unsupervised representation learning on the zero resource speech challenge",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Renshaw",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Renshaw, Herman Kamper, Aren Jansen, and Sharon Goldwater. 2015. A comparison of neu- ral network methods for unsupervised representation learning on the zero resource speech challenge. In Proc. Interspeech.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Globalphone: a multilingual speech and text database developed at karlsruhe university",
"authors": [
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2002,
"venue": "Seventh International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanja Schultz. 2002. Globalphone: a multilingual speech and text database developed at karlsruhe uni- versity. In Seventh International Conference on Spo- ken Language Processing.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Mach. Learn. Res",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Multilingual mlp features for lowresource LVCSR systems",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Sriram",
"middle": [],
"last": "Ganapathy",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Thomas, Sriram Ganapathy, and Hynek Her- mansky. 2012. Multilingual mlp features for low- resource LVCSR systems. In Proc. ICASSP.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Is learning the n-th thing any easier than learning the first?",
"authors": [
{
"first": "",
"middle": [],
"last": "Sebastian Thrun",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Thrun. 1995. Is learning the n-th thing any easier than learning the first? In Proc. NIPS.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Chainer: A next-generation open source framework for deep learning",
"authors": [
{
"first": "Seiya",
"middle": [],
"last": "Tokui",
"suffix": ""
},
{
"first": "Kenta",
"middle": [],
"last": "Oono",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Hido",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Clayton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Learn-ingSys",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: A next-generation open source framework for deep learning. In Proc. Learn- ingSys.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A Comparison of Techniques for Language Model Integration in Encoder-Decoder Speech Recognition",
"authors": [
{
"first": "Shubham",
"middle": [],
"last": "Toshniwal",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shubham Toshniwal, Anjuli Kannan, Chung-Cheng Chiu, Yonghui Wu, Tara N Sainath, and Karen Livescu. 2018a. A Comparison of Techniques for Language Model Integration in Encoder-Decoder Speech Recognition. In Proc. SLT.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Multilingual Speech Recognition with A Single End-To-End Model",
"authors": [
{
"first": "Shubham",
"middle": [],
"last": "Toshniwal",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Weinstein",
"suffix": ""
},
{
"first": "Kanishka",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, and Kan- ishka Rao. 2018b. Multilingual Speech Recogni- tion with A Single End-To-End Model. In Proc. ICASSP.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "An investigation on initialization schemes for multilayer perceptron training using multilingual data and their effect on ASR performance",
"authors": [
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Wojtek",
"middle": [],
"last": "Breiter",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc Thang Vu, Wojtek Breiter, Florian Metze, and Tanja Schultz. 2012. An investigation on initializa- tion schemes for multilayer perceptron training us- ing multilingual data and their effect on ASR perfor- mance. In Proc. Interspeech.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Sequence-tosequence models can directly transcribe foreign speech",
"authors": [
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to- sequence models can directly transcribe foreign speech. In Proc. Interspeech.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Comput",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams and David Zipser. 1989. A learn- ing algorithm for continually running fully recurrent neural networks. Neural Comput.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neu- ral machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Learning neural network representations using cross-lingual bottleneck features with word-pair information",
"authors": [
{
"first": "Yougen",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Cheung-Chi",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yougen Yuan, Cheung-Chi Leung, Lei Xie, Bin Ma, and Haizhou Li. 2016. Learning neural network rep- resentations using cross-lingual bottleneck features with word-pair information. In Proc. Interspeech.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proc. EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Encoder-decoder with attention model architecture for both ASR and ST. The encoder input is the Spanish speech utterance claro, translated as clearly, represented as BPE (subword) units.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "(top) BLEU and (bottom) Unigram precision/recall for Spanish-English ST models computed on Fisher dev set. base indicates no transfer learning; +asr are models trained by fine-tuning en-300h model parameters. naive baseline indicates the score when we predict the 15 most frequent English words in the training set.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Attention plots for the final example in Table 3, using 50h models with and without pre-training. The x-axis shows the reference Spanish word positions in the input; the y-axis shows the predicted English subwords. In the reference, mucho tiempo is translated to long time, and vive aqu\u00ed to living here, but their order is reversed, and this is reflected in (b).",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "Word Error Rate (WER, in %) for the ASR models used as pretraining, computed on Switchboard train-dev for English and Globalphone dev for French.",
"content": "<table><tr><td/><td colspan=\"3\">en-100h en-300h fr-20h</td></tr><tr><td>WER</td><td>35.4</td><td>27.3</td><td>29.6</td></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"text": "BLEU scores for Spanish-English ST on the Fisher test set, using N hours of training data. base: no transfer learning. +asr: using model parameters from English ASR (en-300h).",
"content": "<table><tr><td colspan=\"2\">Spanish super caliente pero muy bonito</td></tr><tr><td colspan=\"2\">English super hot but very nice</td></tr><tr><td>20h</td><td>you support it but it was very nice</td></tr><tr><td colspan=\"2\">20h+asr you can get alright but it's very nice</td></tr><tr><td>50h</td><td>super expensive but very nice</td></tr><tr><td colspan=\"2\">50h+asr super hot but it's very nice</td></tr><tr><td colspan=\"2\">Spanish s\u00ed y usted hace mucho tiempo que que vive aqu\u00ed</td></tr><tr><td colspan=\"2\">English yes and have you been living here a long time</td></tr><tr><td>20h</td><td>yes i've been a long time what did you come here</td></tr><tr><td colspan=\"2\">20h+asr yes and you have a long time that you live here</td></tr><tr><td>50h</td><td>yes you are a long time that you live here</td></tr><tr><td colspan=\"2\">50h+asr yes and have you been here long</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Example translations on selected sentences from the Fisher development set, with stem-level ngram matches to the reference sentence underlined. 20h and 50h are Spanish-English models without pretraining; 20h+asr and 50h+asr are pre-trained on 300 hours of English ASR.",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "Fisher dev set BLEU scores for sp-en-20h. baseline: model without transfer learning. Last two columns: Using encoder parameters from French ASR (+fr-20h), and English ASR (+en-20h).",
"content": "<table><tr><td>model</td><td colspan=\"2\">pretrain BLEU</td><td colspan=\"2\">Pr. Rec.</td></tr><tr><td>fr-top-8w</td><td>-</td><td colspan=\"3\">0 23.5 22.2</td></tr><tr><td>fr-top-10w</td><td>-</td><td colspan=\"3\">0 20.6 24.5</td></tr><tr><td>en-300h</td><td>-</td><td>0</td><td>0.2</td><td>5.7</td></tr><tr><td>fr-20h</td><td>-</td><td>0</td><td>4.1</td><td>3.2</td></tr><tr><td/><td>-</td><td colspan=\"3\">3.5 18.6 19.4</td></tr><tr><td>mb-fr-4h</td><td>fr-20h</td><td/><td/><td/></tr></table>"
}
}
}
}