{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:44:11.969320Z"
},
"title": "ESPnet-ST: All-in-One Speech Translation Toolkit",
"authors": [
{
"first": "Hirofumi",
"middle": [],
"last": "Inaguma",
"suffix": "",
"affiliation": {},
"email": "inaguma@sap.ist.i.kyoto-u.ac.jp"
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Karita",
"suffix": "",
"affiliation": {
"laboratory": "NTT Communication Science Laboratories",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Yalta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {}
},
"email": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nagoya University",
"location": {}
},
"email": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present ESPnet-ST, which is designed for the quick development of speech-to-speech translation systems in a single framework. ESPnet-ST is a new project inside end-toend speech processing toolkit, ESPnet, which integrates or newly implements automatic speech recognition, machine translation, and text-to-speech functions for speech translation. We provide all-in-one recipes including data pre-processing, feature extraction, training, and decoding pipelines for a wide range of benchmark datasets. Our reproducible results can match or even outperform the current state-of-the-art performances; these pretrained models are downloadable. The toolkit is publicly available at https://github. com/espnet/espnet.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present ESPnet-ST, which is designed for the quick development of speech-to-speech translation systems in a single framework. ESPnet-ST is a new project inside end-toend speech processing toolkit, ESPnet, which integrates or newly implements automatic speech recognition, machine translation, and text-to-speech functions for speech translation. We provide all-in-one recipes including data pre-processing, feature extraction, training, and decoding pipelines for a wide range of benchmark datasets. Our reproducible results can match or even outperform the current state-of-the-art performances; these pretrained models are downloadable. The toolkit is publicly available at https://github. com/espnet/espnet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech translation (ST), where converting speech signals in a language to text in another language, is a key technique to break the language barrier for human communication. Traditional ST systems involve cascading automatic speech recognition (ASR), text normalization (e.g., punctuation insertion, case restoration), and machine translation (MT) modules; we call this Cascade-ST (Ney, 1999; Casacuberta et al., 2008; Kumar et al., 2014) . Recently, sequence-to-sequence (S2S) models have become the method of choice in implementing both the ASR and MT modules (c.f. (Chan et al., 2016; Bahdanau et al., 2015) ). This convergence of models has opened up the possibility of designing end-to-end speech translation (E2E-ST) systems, where a single S2S directly maps speech in a source language to its translation in the target language (B\u00e9rard et al., 2016; .",
"cite_spans": [
{
"start": 381,
"end": 392,
"text": "(Ney, 1999;",
"ref_id": "BIBREF30"
},
{
"start": 393,
"end": 418,
"text": "Casacuberta et al., 2008;",
"ref_id": "BIBREF8"
},
{
"start": 419,
"end": 438,
"text": "Kumar et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 568,
"end": 587,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 588,
"end": 610,
"text": "Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 835,
"end": 856,
"text": "(B\u00e9rard et al., 2016;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "E2E-ST has several advantages over the cascaded approach: (1) a single E2E-ST model can reduce latency at inference time, which is useful for time-critical use cases like simultaneous interpretation. (2) A single model enables back-propagation training in an end-to-end fashion, which mitigates the risk of error propagation by cascaded modules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) In certain use cases such as endangered language documentation (Bird et al., 2014) , source speech and target text translation (without the intermediate source text transcript) might be easier to obtain, necessitating the adoption of E2E-ST models (Anastasopoulos and Chiang, 2018) . Nevertheless, the verdict is still out on the comparison of translation quality between E2E-ST and Cascade-ST. Some empirical results favor E2E while others favor Cascade ; the conclusion also depends on the nuances of the training data condition (Sperber et al., 2019) .",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "(Bird et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 252,
"end": 285,
"text": "(Anastasopoulos and Chiang, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 535,
"end": 557,
"text": "(Sperber et al., 2019)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe the time is ripe to develop a unified toolkit that facilitates research in both E2E and cascaded approaches. We present ESPnet-ST, a toolkit that implements many of the recent models for E2E-ST, as well as the ASR and MT modules for Cascade-ST. Our goal is to provide a toolkit where researchers can easily incorporate and test new ideas under different approaches. Recent research suggests that pre-training, multi-task learning, and transfer learning are important techniques for achieving improved results for E2E-ST (B\u00e9rard et al., 2018; Anastasopoulos and Chiang, 2018; Bansal et al., 2019; . Thus, a unified toolkit that enables researchers to seamlessly mix-and-match different ASR and MT models in training both E2E-ST and Cascade-ST systems would facilitate research in the field. 1 ESPnet-ST is especially designed to target the ST task. ESPnet was originally developed for the 1 There exist many excellent toolkits that support both ASR and MT tasks (see Table 1 ). However, it is not always straightforward to use them for E2E-ST and Cascade-ST, due to incompatible training/inference pipelines in different modules or lack of detailed preprocessing/training scripts.",
"cite_spans": [
{
"start": 531,
"end": 552,
"text": "(B\u00e9rard et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 553,
"end": 585,
"text": "Anastasopoulos and Chiang, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 586,
"end": 606,
"text": "Bansal et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 801,
"end": 802,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 977,
"end": 984,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Example (w/ corpus pre-processing) Pre-trained model ASR LM E2E- Cascade-MT TTS ASR LM E2E-Cascade-MT TTS ST ST ST ASR task (Watanabe et al., 2018) , and recently extended to the text-to-speech (TTS) task (Hayashi et al., 2020) . Here, we extend ESPnet to ST tasks, providing code for building translation systems and recipes (i.e., scripts that encapsulate the entire training/inference procedure for reproducibility purposes) for a wide range of ST benchmarks. This is a non-trivial extension: with a unified codebase for ASR/MT/ST and a wide range of recipes, we believe ESPnet-ST is an all-in-one toolkit that should make it easier for both ASR and MT researchers to get started in ST research. The contributions of ESPnet-ST are as follows:",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Watanabe et al., 2018)",
"ref_id": "BIBREF51"
},
{
"start": 209,
"end": 231,
"text": "(Hayashi et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 65,
"end": 118,
"text": "Cascade-MT TTS ASR LM E2E-Cascade-MT TTS ST ST ST",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Supported task",
"sec_num": null
},
{
"text": "ST ESPnet-ST (ours) Lingvo 1 \u2663 \u2663 \u2663 - - - - OpenSeq2seq 2 - - - - - NeMo 3 - - - - - RETURNN 4 - - - - - - - - SLT.KIT 5 - - - - Fairseq 6 - - - - - - Tensor2Tensor 7 - - - - - - - - \u2666 OpenNMT-{py, tf} 8 - - - - - - - - - Kaldi 9 - - - - - - - - Wav2letter++ 10 - - - - - - - -",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supported task",
"sec_num": null
},
{
"text": "\u2022 To the best of our knowledge, this is the first toolkit to include ASR, MT, TTS, and ST recipes and models in the same codebase. Since our codebase is based on the unified framework with a common stage-by-stage processing (Povey et al., 2011) , it is very easy to customize training data and models.",
"cite_spans": [
{
"start": 224,
"end": 244,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supported task",
"sec_num": null
},
{
"text": "\u2022 We provide recipes for ST corpora such as Fisher-CallHome (Post et al., 2013) , Libri-trans (Kocabiyikoglu et al., 2018), How2 (Sanabria et al., 2018) , and Must-C (Di Gangi et al., 2019a) 2 . Each recipe contains a single script (run.sh), which covers all experimental processes, such as corpus preparation, data augmentations, and transfer learning.",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "(Post et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 129,
"end": 152,
"text": "(Sanabria et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supported task",
"sec_num": null
},
{
"text": "We provide the open-sourced toolkit and the pre-trained models whose hyper-parameters are intensively tuned. Moreover, we provide interactive demo of speech-to-speech translation hosted by Google Colab. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supported task",
"sec_num": null
},
{
"text": "All required tools are automatically downloaded and built under tools (see Figure 1 ) by a make command. The tools include (1) neural network libraries such as PyTorch (Paszke et al., 2019) , (2) ASR-related toolkits such as Kaldi (Povey et al., 2011) , and (3) MT-related toolkits such as Moses (Koehn et al., 2007) and sentencepiece (Kudo, 2018) . ESPnet-ST is implemented with Pytorch backend.",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 231,
"end": 251,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF39"
},
{
"start": 296,
"end": 316,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF22"
},
{
"start": 335,
"end": 347,
"text": "(Kudo, 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Installation",
"sec_num": "2.1"
},
{
"text": "We provide various recipes for all tasks in order to quickly and easily reproduce the strong baseline systems with a single script. The directory structure is depicted as in Figure 1 . egs contains corpus directories, in which the corresponding task directories (e.g., st1) are included. To run experiments, we simply execute run.sh under the desired task directory. Configuration yaml files for feature extraction, data augmentation, model training, and decoding etc. are included in conf. Model directories including checkpoints are saved under exp. More details are described in Section 2.4. ",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 182,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Recipes for reproducible experiments",
"sec_num": "2.2"
},
{
"text": "Figure 2: All-in-one process pipelines in ESPnet-ST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recipes for reproducible experiments",
"sec_num": "2.2"
},
{
"text": "We support language modeling (LM), neural textto-speech (TTS) in addition to ASR, ST, and MT tasks. To the best of our knowledge, none of frameworks support all these tasks in a single toolkit. A comparison with other frameworks are summarized in Table 1 . Conceptually, it is possible to combine ASR and MT modules for Cascade-ST, but few frameworks provide such examples. Moreover, though some toolkits indeed support speechto-text tasks, it is not trivial to switch ASR and E2E-ST tasks since E2E-ST requires the auxiliary tasks (ASR/MT objectives) to achieve reasonable performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Tasks",
"sec_num": "2.3"
},
{
"text": "ESPnet-ST is based on a stage-by-stage processing including corpus-dependent pre-processing, feature extraction, training, and decoding stages. We follow Kaldi-style data preparation, which makes it easy to augment speech data by leveraging other data resources prepared in egs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "Once run.sh is executed, the following processes are started.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "Stage 0: Corpus-dependent pre-processing is conducted using scripts under local and the resulting text data is automatically saved under data. Both transcriptions and the corresponding translations with three different treatments of casing and punctuation marks (hereafter, punct.) are generated after text normalization and tokenization with tokenizer.perl in Moses; (a) tc: truecased text with punct., (b) lc: lowercased text with punct., and (3) lc.rm: lowercased text without punct. except for apostrophe. lc.rm is designed for the ASR task since the conventional ASR system does not generate punctuation marks. However, it is possible to train ASR models so as to generate truecased text using tc. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
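{
"text": "As a minimal illustrative sketch (not ESPnet-ST's actual implementation), the three treatments could be reproduced in Python as follows, assuming the input is already tokenized by tokenizer.perl:\n\nimport re\n\ndef treatments(tokenized):\n    tc = tokenized  # (a) truecased, with punct.\n    lc = tokenized.lower()  # (b) lowercased, with punct.\n    lc_rm = re.sub(r\"[^\\w\\s']\", \"\", lc)  # (c) drop punct. except apostrophe\n    return tc, lc, re.sub(r\"\\s+\", \" \", lc_rm).strip()\n\nprint(treatments(\"Yes , it 's a Demo .\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},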
{
"text": "Stage 1: Speech feature extraction based on Kaldi and our own implementations is performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "Stage 2: Dataset JSON files in a format ingestable by ESPnet's Pytorch back-end (containing token/utterance/speaker/language IDs, input and output sequence lengths, transcriptions, and translations) are dumped under dump.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
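{
"text": "For illustration, a single utterance entry in the dumped JSON could look like the following sketch; the field names and values here are simplified assumptions rather than the exact ESPnet schema:\n\n# Hypothetical Stage 2 entry (illustrative only).\nexample_utt = {\n    \"utt_id\": \"fisher_sp_000001\",  # hypothetical utterance ID\n    \"speaker\": \"spk001\",\n    \"lang\": \"es\",\n    \"input\": [{\"feat\": \"dump/train/feats.1.ark:12\", \"shape\": [512, 83]}],  # frames x dims\n    \"output\": [{\"text\": \"yes it 's a demo\", \"tokenid\": \"11 42 7 9 23\", \"shape\": [5, 1000]}],\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},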
{
"text": "Stage 3: (ASR recipe only) LM is trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "Stage 4: Model training (RNN/Transformer) is performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "Stage 5: Model averaging, beam search decoding, and score calculation are conducted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
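{
"text": "As a sketch of the checkpoint-averaging step, written with generic PyTorch code rather than ESPnet's own utilities:\n\nimport torch\n\ndef average_checkpoints(paths):\n    # Average the parameters of several saved checkpoints.\n    avg = None\n    for p in paths:\n        state = torch.load(p, map_location=\"cpu\")\n        if avg is None:\n            avg = {k: v.clone().float() for k, v in state.items()}\n        else:\n            for k in avg:\n                avg[k] += state[k].float()\n    return {k: v / len(paths) for k, v in avg.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},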
{
"text": "Stage 6: (Cascade-ST recipe only) The system is evaluated by feeding ASR outputs to the MT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage-by-stage processing",
"sec_num": "2.4"
},
{
"text": "In ST literature, it is acknowledged that the optimization of E2E-ST is more difficult than individually training ASR and MT models. Multitask training (MTL) and transfer learning from ASR and MT tasks are promising approaches for this problem B\u00e9rard et al., 2018; Sperber et al., 2019; Bansal et al., 2019) . Thus, in Stage 4 of the E2E-ST recipe, we allow options to add auxiliary ASR and MT objectives. We also support options to initialize the parameters of the ST encoder with a pre-trained ASR encoder in asr1, and to initialize the parameters of the ST decoder with a pre-trained MT decoder in mt1.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "B\u00e9rard et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 265,
"end": 286,
"text": "Sperber et al., 2019;",
"ref_id": "BIBREF47"
},
{
"start": 287,
"end": 307,
"text": "Bansal et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning and transfer learning",
"sec_num": "2.5"
},
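{
"text": "A minimal PyTorch sketch of this initialization; st_model and the checkpoint paths are placeholders, and the prefix names assume the shared encoder/decoder naming convention described here:\n\nimport torch\n\ndef init_from_pretrained(st_model, asr_ckpt, mt_ckpt):\n    st_state = st_model.state_dict()\n    for ckpt, prefix in ((asr_ckpt, \"encoder.\"), (mt_ckpt, \"decoder.\")):\n        for name, param in torch.load(ckpt, map_location=\"cpu\").items():\n            if name.startswith(prefix) and name in st_state:\n                st_state[name] = param  # copy weights that match by name\n    st_model.load_state_dict(st_state)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning and transfer learning",
"sec_num": "2.5"
},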
{
"text": "We implement techniques that have shown to give improved robustness in the ASR component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech data augmentation",
"sec_num": "2.6"
},
{
"text": "We augmented speech data by changing the speed with factors of 0.9, 1.0, and 1.1, which results in 3-fold data augmentation. We found this is important to stabilize E2E-ST training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed perturbation",
"sec_num": null
},
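{
"text": "A sketch of 3-fold speed perturbation using torchaudio's sox effects (in the recipes this step is performed during Kaldi-style data preparation rather than in Python):\n\nimport torch\nimport torchaudio\n\ndef speed_perturb(wav, sr, factors=(0.9, 1.0, 1.1)):\n    # Return one perturbed copy of the waveform per speed factor.\n    out = []\n    for f in factors:\n        perturbed, _ = torchaudio.sox_effects.apply_effects_tensor(\n            wav, sr, [[\"speed\", str(f)], [\"rate\", str(sr)]])\n        out.append(perturbed)\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed perturbation",
"sec_num": null
},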
{
"text": "SpecAugment Time and frequency masking blocks are randomly applied to log mel-filterbank features. This has been originally proposed to improve the ASR performance and shown to be effective for E2E-ST as well (Bahar et al., 2019b) .",
"cite_spans": [
{
"start": 209,
"end": 230,
"text": "(Bahar et al., 2019b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speed perturbation",
"sec_num": null
},
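{
"text": "A simplified sketch of SpecAugment-style masking on a (frames x mel-bins) feature matrix; the mask counts and widths below are illustrative hyper-parameters, not the values used in our recipes:\n\nimport numpy as np\n\ndef spec_augment(x, n_time_masks=2, n_freq_masks=2, max_t=40, max_f=30):\n    x = x.copy()\n    t, f = x.shape\n    for _ in range(n_time_masks):\n        w = np.random.randint(0, max_t)\n        s = np.random.randint(0, max(1, t - w))\n        x[s:s + w, :] = 0.0  # time mask\n    for _ in range(n_freq_masks):\n        w = np.random.randint(0, max_f)\n        s = np.random.randint(0, max(1, f - w))\n        x[:, s:s + w] = 0.0  # frequency mask\n    return x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed perturbation",
"sec_num": null
},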
{
"text": "Multilingual training, where datasets from different language pairs are combined to train a single model, is a potential way to improve performance of E2E-ST models Di Gangi et al., 2019c) . Multilingual E2E-ST/MT models are supported in several recipes.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "Di Gangi et al., 2019c)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual training",
"sec_num": "2.7"
},
{
"text": "Experiment manager We customize the data loader, trainer, and evaluator by overriding Chainer (Tokui et al., 2019) modules. The common processes are shared among all tasks.",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Tokui et al., 2019)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},
{
"text": "Large-scale training/decoding We support job schedulers (e.g., SLURM, Grid Engine), multiple GPUs and half/mixed-precision training/decoding with apex (Micikevicius et al., 2018) . 5 Our beam search implementation vectorizes hypotheses for faster decoding (Seki et al., 2019) .",
"cite_spans": [
{
"start": 151,
"end": 178,
"text": "(Micikevicius et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 256,
"end": 275,
"text": "(Seki et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},
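{
"text": "The idea can be sketched with generic PyTorch code (not ESPnet's implementation): all beam hypotheses are scored in one batched operation per decoding step instead of a Python loop over hypotheses.\n\nimport torch\n\ndef beam_step(log_probs, beam_scores, beam_size):\n    # log_probs: (beam, vocab) next-token log-probabilities; beam_scores: (beam,).\n    scores = beam_scores.unsqueeze(1) + log_probs  # (beam, vocab)\n    top_scores, flat_idx = scores.view(-1).topk(beam_size)\n    prev_hyp = torch.div(flat_idx, log_probs.size(1), rounding_mode=\"floor\")\n    next_tok = flat_idx % log_probs.size(1)\n    return top_scores, prev_hyp, next_tok",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},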
{
"text": "Performance monitoring Attention weights and all kinds of training/validation scores and losses for ASR, MT, and ST tasks can be collectively monitored through TensorBoard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},
{
"text": "Ensemble decoding Averaging posterior probabilities from multiple models during beam search decoding is supported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},
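{
"text": "A sketch of the idea, assuming each model exposes a hypothetical decode_step function returning next-token logits: posterior probabilities (not logits) are averaged across models before taking the log.\n\nimport torch\n\ndef ensemble_log_probs(models, states, ys):\n    probs = torch.stack(\n        [m.decode_step(s, ys).softmax(-1) for m, s in zip(models, states)])\n    return probs.mean(dim=0).log()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional features",
"sec_num": "2.8"
},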
{
"text": "To give a flavor of the models that are supported with ESPnet-ST, we describe in detail the construction of an example E2E-ST model, which is used later in the Experiments section. Note that there are many customizable options not mentioned here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},
{
"text": "5 https://github.com/NVIDIA/apex Automatic speech recognition (ASR) We build ASR components with the Transformer-based hybrid CTC/attention framework (Watanabe et al., 2017) , which has been shown to be more effective than RNN-based models on various speech corpora (Karita et al., 2019) . Decoding with the external LSTM-based LM trained in the Stage 3 is also conducted (Kannan et al., 2017) . The transformer uses 12 self-attention blocks stacked on the two VGG blocks in the speech encoder and 6 self-attention blocks in the transcription decoder; see (Karita et al., 2019) for implementation details.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Watanabe et al., 2017)",
"ref_id": "BIBREF52"
},
{
"start": 266,
"end": 287,
"text": "(Karita et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 372,
"end": 393,
"text": "(Kannan et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 556,
"end": 577,
"text": "(Karita et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},
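{
"text": "The hybrid objective interpolates a CTC loss on the encoder outputs with the attention decoder's cross-entropy; a generic PyTorch sketch (names and the weight alpha, e.g. 0.3, are illustrative, and ESPnet's actual code differs):\n\nimport torch.nn.functional as F\n\ndef hybrid_ctc_attention_loss(ctc_logits, ys_ctc, in_lens, y_lens,\n                              att_logits, ys_att, alpha=0.3):\n    # F.ctc_loss expects (time, batch, vocab) log-probabilities.\n    ctc = F.ctc_loss(ctc_logits.log_softmax(-1).transpose(0, 1),\n                     ys_ctc, in_lens, y_lens)\n    att = F.cross_entropy(att_logits.reshape(-1, att_logits.size(-1)),\n                          ys_att.reshape(-1))\n    return alpha * ctc + (1.0 - alpha) * att",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},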
{
"text": "Machine translation (MT) The MT model consists of the source text encoder and translation decoder, implemented as a transformer with 6 selfattention blocks. For simplicity, we train the MT model by feeding lowercased source sentences without punctuation marks (lc.rm) (Peitz et al., 2011) . There are options to explore characters and different subword units in the MT component.",
"cite_spans": [
{
"start": 268,
"end": 288,
"text": "(Peitz et al., 2011)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},
{
"text": "End-to-end speech translation (E2E-ST) Our E2E-ST model is composed of the speech encoder and translation decoder. Since the definition of parameter names is exactly same as in the ASR and MT components, it is quite easy to copy parameters from the pre-trained models for transfer learning. After ASR and MT models are trained as described above, their parameters are extracted and used to initialize the E2E-ST model. The model is then trained on ST data, with the option of incorporating multi-task objectives as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},
{
"text": "Text-to-speech (TTS) We also support end-toend text-to-speech (E2E-TTS), which can be applied after ST outputs a translation. The E2E-TTS model consists of the feature generation network converting an input text to acoustic features (e.g., log-mel filterbank coefficients) and the vocoder network converting the features to a waveform. Tacotron 2 (Shen et al., 2018) , Transformer-TTS , FastSpeech (Ren et al., 2019) , and their variants such as a multi-speaker model are supported as the feature generation network. WaveNet (van den Oord et al., 2016) and Parallel WaveGAN are available as the vocoder network. See Hayashi et al. (2020) for more details.",
"cite_spans": [
{
"start": 347,
"end": 366,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 398,
"end": 416,
"text": "(Ren et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 525,
"end": 552,
"text": "(van den Oord et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 616,
"end": 637,
"text": "Hayashi et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example Models",
"sec_num": "3"
},
{
"text": "In this section, we demonstrate how models from our ESPnet recipes perform on benchmark speech 115.53 + MT decoder init. 216.22 + SpecAugment 316.70 + Ensemble 3 models 1 + 2 + 317.40",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Transformer ASR \u2192 Transformer MT 1 17.85 ESPnet-ST Transformer ASR \u2666 \u2192 Transformer MT 16.96 Table 3 : BLEU of ST systems on Libri-trans corpus. \u2663 Implemented w/ ESPnet. Pre-training. \u2666 w/ SpecAugment. 1 (Liu et al., 2019) 2 (Bahar et al., 2019a) 3 (Bahar et al., 2019b) 4 (Wang et al., 2020) translation corpora: Fisher-CallHome Spanish En\u2192Es, Libri-trans En\u2192Fr, How2 En\u2192Pt, and Must-C En\u21928 languages. Moreover, we also performed experiments on IWSLT16 En-De to validate the performance of our MT modules. All sentences were tokenized with the tokenizer.perl script in the Moses toolkit (Koehn et al., 2007) . We used the joint source and target vocabularies based on byte pair encoding (BPE) (Sennrich et al., 2016) units. ASR vocabularies were created with English sentences only with lc.rm. We report 4-gram BLEU (Papineni et al., 2002) scores with the multi-bleu.perl script in Moses. For speech features, we extracted 80-channel log-mel filterbank coefficients with 3-dimensional pitch features using Kaldi, resulting 83-dimensional features per frame. Detailed training and decoding configura- 245.63 + SpecAugment 345.68 + Ensemble 3 models ( 1 + 2 + 3 ) 48.04",
"cite_spans": [
{
"start": 272,
"end": 291,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF50"
},
{
"start": 587,
"end": 607,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF22"
},
{
"start": 693,
"end": 716,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 816,
"end": 839,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cascade",
"sec_num": null
},
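{
"text": "For instance, a joint BPE model can be trained with the sentencepiece Python API (a sketch; the input path is a placeholder for concatenated source and target text, and the recipes typically call the command-line tools instead):\n\nimport sentencepiece as spm\n\nspm.SentencePieceTrainer.train(\n    input=\"data/train/joint_src_tgt.txt\", model_prefix=\"data/bpe1k\",\n    vocab_size=1000, model_type=\"bpe\")\nsp = spm.SentencePieceProcessor(model_file=\"data/bpe1k.model\")\nprint(sp.encode(\"yes it 's a demo\", out_type=str))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},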
{
"text": "ESPnet-ST Transformer ASR \u2192 Transformer MT 44.90 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cascade",
"sec_num": null
},
{
"text": "Fisher-CallHome Spanish corpus contains 170hours of Spanish conversational telephone speech, the corresponding transcription, as well as the English translations (Post et al., 2013) . All punctuation marks except for apostrophe were removed (Post et al., 2013; Kumar et al., 2014; . We report case-insensitive BLEU on Fisher-{dev, dev2, test} (with four references), and CallHome-{devtest, evltest} (with a single reference). We used 1k vocabulary for all tasks. Results are shown in Table 2 . It is worth noting that we did not use any additional data resource. Both MTL and transfer learning improved the performance of vanilla Transformer. Our best system with SpecAugment matches the current state-ofthe-art performance . Moreover, the total training/inference time is much shorter since our E2E-ST models are based on the BPE1k unit rather than characters. 6",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "(Post et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 241,
"end": 260,
"text": "(Post et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 261,
"end": 280,
"text": "Kumar et al., 2014;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Fisher-CallHome Spanish (Es\u2192En)",
"sec_num": "4.1"
},
{
"text": "Transformer + ASR encoder init. Libri-trans corpus contains 236-hours of English read speech, the corresponding transcription, and the French translations . We used the clean 100-hours of speech data and augmented translation references with Google Translate for the training set (B\u00e9rard et al., 2018; Bahar et al., 2019a,b) . We report case-insensitive BLEU on the test set. We used 1k vocabulary for all tasks. Results are shown in Table 3 . Note that all models used the same data resource and are competitive to previous work.",
"cite_spans": [
{
"start": 280,
"end": 301,
"text": "(B\u00e9rard et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 302,
"end": 324,
"text": "Bahar et al., 2019a,b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 434,
"end": 441,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "How2 corpus contains English speech extracted from YouTube videos, the corresponding transcription, as well as the Portuguese translation (Sanabria et al., 2018) . We used the official 300-hour subset for training. Since speech features in the How2 corpus is pre-processed as 40-channel log-mel filterbank coefficients with 3-dimensional pitch features with Kaldi in advance, we used them without speed perturbation. We used 5k and 8k vocabularies for ASR and E2E-ST/MT models, respectively. We report case-sensitive BLEU on the dev5 set.",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Sanabria et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How2 (En\u2192 Pt)",
"sec_num": "4.3"
},
{
"text": "Results are shown in Table 4 . Our systems significantly outperform the previous RNN-based model (Sanabria et al., 2018) . We believe that our systems can be regarded as the reliable baselines for future research.",
"cite_spans": [
{
"start": 97,
"end": 120,
"text": "(Sanabria et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "How2 (En\u2192 Pt)",
"sec_num": "4.3"
},
{
"text": "Must-C corpus contains English speech extracted from TED talks, the corresponding transcription, and the target translations in 8 language directions (De, Pt, Fr, Es, Ro, Ru, Nl, and It) (Di Gangi et al., 2019a) . We conducted experiments in all 8 directions. We used 5k and 8k vocabularies for ASR and E2E-ST/MT models, respectively. We report case-sensitive BLEU on the tst-COMMON set.",
"cite_spans": [
{
"start": 150,
"end": 211,
"text": "(De, Pt, Fr, Es, Ro, Ru, Nl, and It) (Di Gangi et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Must-C (En\u2192 8 langs)",
"sec_num": "4.4"
},
{
"text": "Results are shown in Table 5 . Our systems outperformed the previous work (Di Gangi et al., 2019b) implemented with the custermized Fairseq 7 with a large margin.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "Gangi et al., 2019b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Must-C (En\u2192 8 langs)",
"sec_num": "4.4"
},
{
"text": "IWSLT evaluation campaign dataset (Cettolo et al., 2012) is the origin of the dataset for our MT experiments. We used En-De language pair. Specifically, IWSLT 2016 training set for training data, test2012 as the development data, and test2013 and test2014 sets as our test data respectively.",
"cite_spans": [
{
"start": 34,
"end": 56,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT experiment: IWSLT16 En \u2194 De",
"sec_num": "4.5"
},
{
"text": "We compare the performance of Transformer model in ESPnet-ST with that of Fairseq in Table 6. ESPnet-ST achieves the performance almost comparable to the Fairseq. We assume that the performance gap is due to the minor difference in the implementation of two frameworks. Also, we carefully tuned the hyper-parameters for the MT task in the small ST corpora, which is confirmed from the reasonable performances of our Cascaded-ST systems. It is acknowledged that Transformer model is extremely sensitive to the hyper-parameters such as the learning rate and the number of warmup steps (Popel and Bojar, 2018) . Thus, it is possible that the suitable sets of hyper-parameters are different across frameworks.",
"cite_spans": [
{
"start": 583,
"end": 606,
"text": "(Popel and Bojar, 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT experiment: IWSLT16 En \u2194 De",
"sec_num": "4.5"
},
{
"text": "We presented ESPnet-ST for the fast development of end-to-end and cascaded ST systems. We provide various all-in-one example scripts containing corpus-dependent pre-processing, feature extraction, training, and inference. In the future, we will support more corpora and implement novel techniques to bridge the gap between end-to-end and cascaded approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We also support ST-TED and lowresourced Mboshi-French(Godard et al., 2018) recipes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://colab.research.google.com/ github/espnet/notebook/blob/master/st_ demo.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We found that this degrades the ASR performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "trained their model for more than 2.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mattiadg/ FBK-Fairseq-ST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Jun Suzuki for providing helpful feedback for the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tied multitask learning for neural speech translation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018)",
"volume": "",
"issue": "",
"pages": "82--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT 2018), pages 82-91.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A comparative study on end-to-end speech to text translation",
"authors": [
{
"first": "Parnia",
"middle": [],
"last": "Bahar",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Bieschke",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019)",
"volume": "",
"issue": "",
"pages": "792--799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parnia Bahar, Tobias Bieschke, and Hermann Ney. 2019a. A comparative study on end-to-end speech to text translation. In Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), pages 792-799.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On using SpecAugment for endto-end speech translation",
"authors": [
{
"first": "Parnia",
"middle": [],
"last": "Bahar",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Zeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 16th International Workshop on Spoken Language Translation 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parnia Bahar, Albert Zeyer, Ralf Schl\u00fcter, and Her- mann Ney. 2019b. On using SpecAugment for end- to-end speech translation. In Proceedings of 16th International Workshop on Spoken Language Trans- lation 2019 (IWSLT 2019).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Rep- resentations (ICLR 2015).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pretraining on high-resource speech recognition improves low-resource speech-to-text translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)",
"volume": "",
"issue": "",
"pages": "58--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pre- training on high-resource speech recognition im- proves low-resource speech-to-text translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 58-68.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "End-to-end automatic speech translation of audiobooks",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "6224--6228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Laurent Besacier, Ali Can Ko- cabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In Pro- ceedings of 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), pages 6224-6228.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Listen and translate: A proof of concept for end-to-end speech-to-text translation",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NIPS 2016 End-to-end Learning for Speech and Audio Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text trans- lation. In Proceedings of NIPS 2016 End-to-end Learning for Speech and Audio Processing Work- shop.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Collecting bilingual audio in remote indigenous communities",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Lauren",
"middle": [],
"last": "Gawne",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Gelbart",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Mcalister",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics (COLING 2014)",
"volume": "",
"issue": "",
"pages": "1015--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Lauren Gawne, Katie Gelbart, and Isaac McAlister. 2014. Collecting bilingual audio in remote indigenous communities. In Proceedings of COLING 2014, the 25th International Confer- ence on Computational Linguistics (COLING 2014), pages 1015-1024.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Recent efforts in spoken language translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Signal Processing Magazine",
"volume": "25",
"issue": "3",
"pages": "80--88",
"other_ids": {
"DOI": [
"10.1109/MSP.2008.917989"
]
},
"num": null,
"urls": [],
"raw_text": "F. Casacuberta, M. Federico, H. Ney, and E. Vidal. 2008. Recent efforts in spoken language translation. IEEE Signal Processing Magazine, 25(3):80-88.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wit3: Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Conference of european association for machine translation",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit3: Web inventory of transcribed and translated talks. In Conference of european associa- tion for machine translation, pages 261-268.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 2016 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Proceedings of 2016 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP 2016), pages 4960-4964.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MuST-C: a Multilingual Speech Translation Corpus",
"authors": [
{
"first": "A",
"middle": [
"Di"
],
"last": "Mattia",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)",
"volume": "",
"issue": "",
"pages": "2012--2017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019a. MuST- C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 2012-2017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adapting transformer to end-to-end spoken language translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Mattia",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019)",
"volume": "",
"issue": "",
"pages": "1133--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia A Di Gangi, Matteo Negri, and Marco Turchi. 2019b. Adapting transformer to end-to-end spoken language translation. In Proceedings of 20th Annual Conference of the International Speech Communi- cation Association (INTERSPEECH 2019), pages 1133-1137.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "One-to-many multilingual end-toend speech translation",
"authors": [
{
"first": "Mattia Antonino Di",
"middle": [],
"last": "Gangi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019)",
"volume": "",
"issue": "",
"pages": "585--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mattia Antonino Di Gangi, Matteo Negri, and Marco Turchi. 2019c. One-to-many multilingual end-to- end speech translation. In Proceedings of 2019 IEEE Automatic Speech Recognition and Under- standing Workshop (ASRU 2019), pages 585-592.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A very low resource language speech corpus for computational language documentation experiments",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Godard",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Adda",
"suffix": ""
},
{
"first": "Martine",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Benjumea",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Jamison",
"middle": [],
"last": "Cooper-Leavitt",
"suffix": ""
},
{
"first": "Guy-Noel",
"middle": [],
"last": "Kouarata",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Lamel",
"suffix": ""
},
{
"first": "H\u00e9l\u00e8ne",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Rialland",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Godard, Gilles Adda, Martine Adda-Decker, Juan Benjumea, Laurent Besacier, Jamison Cooper- Leavitt, Guy-Noel Kouarata, Lori Lamel, H\u00e9l\u00e8ne Maynard, Markus Mueller, Annie Rialland, Sebas- tian Stueker, Fran\u00e7ois Yvon, and Marcely Zanon- Boito. 2018. A very low resource language speech corpus for computational language documentation experiments. In Proceedings of the Eleventh Inter- national Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ESPnet-TTS: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit",
"authors": [
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Katsuki",
"middle": [],
"last": "Inoue",
"suffix": ""
},
{
"first": "Takenori",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of 2020 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoki Hayashi, Ryuichi Yamamoto, Katsuki Inoue, Takenori Yoshimura, Shinji Watanabe, Tomoki Toda, Kazuya Takeda, Yu Zhang, and Xu Tan. 2020. ESPnet-TTS: Unified, reproducible, and integrat- able open source end-to-end text-to-speech toolkit. In Proceedings of 2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP 2020).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multilingual end-to-end speech translation",
"authors": [
{
"first": "Hirofumi",
"middle": [],
"last": "Inaguma",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019)",
"volume": "",
"issue": "",
"pages": "570--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirofumi Inaguma, Kevin Duh, Tatsuya Kawahara, and Shinji Watanabe. 2019. Multilingual end-to-end speech translation. In Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), pages 570-577.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The IWSLT 2018 evaluation campaign",
"authors": [
{
"first": "Niehues",
"middle": [],
"last": "Jan",
"suffix": ""
},
{
"first": "Roldano",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "St\u00fcker",
"middle": [],
"last": "Sebastian",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of 15th International Workshop on Spoken Language",
"volume": "",
"issue": "",
"pages": "2--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niehues Jan, Roldano Cattoni, St\u00fcker Sebastian, Mauro Cettolo, Marco Turchi, and Marcello Fed- erico. 2018. The IWSLT 2018 evaluation campaign. In Proceedings of 15th International Workshop on Spoken Language Translation 2018 (IWSLT 2018), pages 2-6.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An analysis of incorporating an external language model into a sequence-to-sequence model",
"authors": [
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Prabhavalkar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of 2017 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "5824--5828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N Sainath, Zhifeng Chen, and Rohit Prabhavalkar. 2017. An analysis of incorporating an external language model into a sequence-to-sequence model. In Proceedings of 2017 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP 2017), pages 5824-5828.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A comparative study on Transformer vs RNN in speech applications",
"authors": [
{
"first": "Shigeki",
"middle": [],
"last": "Karita",
"suffix": ""
},
{
"first": "Nanxin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Hirofumi",
"middle": [],
"last": "Inaguma",
"suffix": ""
},
{
"first": "Ziyan",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Someki",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"Enrique"
],
"last": "",
"suffix": ""
},
{
"first": "Yalta",
"middle": [],
"last": "Soplin",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019)",
"volume": "",
"issue": "",
"pages": "499--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al. 2019. A comparative study on Transformer vs RNN in speech applications. In Proceedings of 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), pages 499-456.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Augmenting Librispeech with French translations: A multimodal corpus for direct speech translation evaluation",
"authors": [
{
"first": "Laurent",
"middle": [],
"last": "Ali Can Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kraif",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Can Kocabiyikoglu, Laurent Besacier, and Olivier Kraif. 2018. Augmenting Librispeech with French translations: A multimodal corpus for direct speech translation evaluation. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "OpenSeq2Seq: Extensible toolkit for distributed and mixed precision training of sequenceto-sequence models",
"authors": [
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Gitman",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Lavrukhin",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Paulius",
"middle": [],
"last": "Micikevicius",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksii Kuchaiev, Boris Ginsburg, Igor Gitman, Vi- taly Lavrukhin, Carl Case, and Paulius Micikevi- cius. 2018. OpenSeq2Seq: Extensible toolkit for distributed and mixed precision training of sequence- to-sequence models. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 41-46.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "NeMo: a toolkit for building AI applications using Neural Modules",
"authors": [
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Oleksii",
"middle": [],
"last": "Hrinchuk",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Leary",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kriman",
"suffix": ""
},
{
"first": "Stanislav",
"middle": [],
"last": "Beliaev",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Lavrukhin",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09577"
]
},
"num": null,
"urls": [],
"raw_text": "Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Olek- sii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. 2019. NeMo: a toolkit for building AI applications using Neural Modules. arXiv preprint arXiv:1909.09577.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Subword regularization: Improving neural network translation models with multiple subword candidates",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"volume": "",
"issue": "",
"pages": "66--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (ACL 2018), pages 66-75.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Some insights from translating conversational telephone speech",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 2014 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "3231--3235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Kumar, Matt Post, Daniel Povey, and Sanjeev Khudanpur. 2014. Some insights from translating conversational telephone speech. In Proceedings of 2014 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP 2014), pages 3231-3235.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural speech synthesis with transformer network",
"authors": [
{
"first": "Naihan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yanqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6706--6713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with trans- former network. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 6706-6713.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "End-to-end speech translation with knowledge distillation",
"authors": [
{
"first": "Yuchen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 20th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1128--1132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019. End-to-end speech translation with knowledge distil- lation. In Proceedings of 20th Annual Conference of the International Speech Communication Associ- ation (INTERSPEECH 2019), pages 1128-1132.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Mixed precision training",
"authors": [
{
"first": "Paulius",
"middle": [],
"last": "Micikevicius",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Jonah",
"middle": [],
"last": "Alben",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Erich",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Ginsburg",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Houston",
"suffix": ""
},
{
"first": "Oleksii",
"middle": [],
"last": "Kuchaiev",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Venkatesh",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed pre- cision training. In Proceedings of the 6th Inter- national Conference on Learning Representations (ICLR 2018).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Speech translation: Coupling of recognition and translation",
"authors": [
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of 1999 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "517--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hermann Ney. 1999. Speech translation: Coupling of recognition and translation. In Proceedings of 1999 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP 1999), pages 517-520.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The IWSLT 2019 evaluation campaign",
"authors": [
{
"first": "J",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cattoni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sanabria",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 16th International Workshop on Spoken Language Translation 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Niehues, R. Cattoni, S. St\u00fcker, M. Negri, M. Turchi, E. Salesky, R. Sanabria, L. Barrault, L. Specia, and M Federico. 2019. The IWSLT 2019 evaluation campaign. In Proceedings of 16th International Workshop on Spoken Language Translation 2019 (IWSLT 2019).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Wavenet: A generative model for raw audio",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "Heiga",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.03499"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. Fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics (ACL 2002), pages 311-318.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "PyTorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Proceed- ings of Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 8024-8035.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Modeling punctuation prediction as machine translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Peitz",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Mauser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 8th International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "238--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Peitz, Markus Freitag, Arne Mauser, and Her- mann Ney. 2011. Modeling punctuation prediction as machine translation. In Proceedings of 8th Inter- national Workshop on Spoken Language Translation 2011 (IWSLT 2011), pages 238-245.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Training Tips for the Transformer Model",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "110",
"issue": "1",
"pages": "43--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Popel and Ond\u0159ej Bojar. 2018. Training Tips for the Transformer Model. The Prague Bulletin of Mathematical Linguistics, 110(1):43-70.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of 10th International Workshop on Spoken Language Translation 2013",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khu- danpur. 2013. Improved speech-to-text transla- tion with the Fisher and Callhome Spanish-English speech translation corpus. In Proceedings of 10th International Workshop on Spoken Language Trans- lation 2013 (IWSLT 2013).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 2011 IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The kaldi speech recognition toolkit. In Proceedings of 2011 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2011).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Wav2Letter++: A fast open-source speech recognition system",
"authors": [
{
"first": "Vineel",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Synnaeve",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Liptchinsky",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "6460--6464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vineel Pratap, Awni Hannun, Qiantong Xu, Jeff Cai, Jacob Kahn, Gabriel Synnaeve, Vitaliy Liptchinsky, and Ronan Collobert. 2019. Wav2Letter++: A fast open-source speech recognition system. In Pro- ceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), pages 6460-6464.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Fastspeech: Fast, robust and controllable text to speech",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yangjun",
"middle": [],
"last": "Ruan",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "3165--3174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. In Ad- vances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 3165-3174.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "How2: A large-scale dataset for multimodal language understanding",
"authors": [
{
"first": "Ramon",
"middle": [],
"last": "Sanabria",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Caglayan",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Palaskar",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Lo\u00efc Barrault, Lucia Specia, and Florian Metze. 2018. How2: A large-scale dataset for multimodal language understanding. In Proceed- ings of the Workshop on Visually Grounded Interac- tion and Language (ViGIL).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Vectorized Beam Search for CTC-Attention-Based Speech Recognition",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Moritz",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019)",
"volume": "",
"issue": "",
"pages": "3825--3829",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2019-2860"
]
},
"num": null,
"urls": [],
"raw_text": "Hiroshi Seki, Takaaki Hori, Shinji Watanabe, Niko Moritz, and Jonathan Le Roux. 2019. Vector- ized Beam Search for CTC-Attention-Based Speech Recognition. In Proceedings of 20th Annual Con- ference of the International Speech Communication Association (INTERSPEECH 2019), pages 3825- 3829.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL 2016), pages 1715-1725.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Lingvo: a modular and scalable framework for sequence-to-sequence modeling",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"X"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Sainath",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.08295"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, et al. 2019. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint arXiv:1902.08295.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Zongheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Skerry-Ryan",
"suffix": ""
},
{
"first": "Rif",
"middle": [
"A"
],
"last": "Saurous",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Agiomyrgiannakis",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of 2017 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4779--4783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, R. J. Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2018. Natural TTS synthesis by con- ditioning WaveNet on Mel spectrogram predictions. In Proceedings of 2017 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP 2017), pages 4779-4783.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Attention-passing models for robust and data-efficient end-to-end speech translation",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "313--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-passing models for ro- bust and data-efficient end-to-end speech translation. Transactions of the Association for Computational Linguistics, 7:313-325.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Chainer: A deep learning framework for accelerating the research cycle",
"authors": [
{
"first": "Seiya",
"middle": [],
"last": "Tokui",
"suffix": ""
},
{
"first": "Ryosuke",
"middle": [],
"last": "Okuta",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Akiba",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Niitani",
"suffix": ""
},
{
"first": "Toru",
"middle": [],
"last": "Ogawa",
"suffix": ""
},
{
"first": "Shunta",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Shuji",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kota",
"middle": [],
"last": "Uenishi",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hiroyuki Yamazaki",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2019)",
"volume": "",
"issue": "",
"pages": "2002--2011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, and Hiroyuki Ya- mazaki Vincent. 2019. Chainer: A deep learn- ing framework for accelerating the research cycle. In Proceedings of the 25th ACM SIGKDD Interna- tional Conference on Knowledge Discovery & Data Mining (KDD 2019), pages 2002-2011.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Tensor2Tensor for neural machine translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Chollet",
"suffix": ""
},
{
"first": "Aidan",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "193--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, \u0141ukasz Kaiser, Nal Kalchbrenner, Niki Par- mar, Ryan Sepassi, Noam Shazeer, and Jakob Uszko- reit. 2018. Tensor2Tensor for neural machine trans- lation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Ameri- cas (Volume 1: Research Papers), pages 193-199, Boston, MA. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Bridging the gap between pretraining and fine-tuning for end-to-end speech translation",
"authors": [
{
"first": "Chengyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhenglu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI conference on artificial intelligence 2020 (AAAI 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020. Bridging the gap between pre- training and fine-tuning for end-to-end speech trans- lation. In Proceedings of the AAAI conference on artificial intelligence 2020 (AAAI 2020).",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "ESPnet: Endto-end speech processing toolkit",
"authors": [
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Karita",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Jiro",
"middle": [],
"last": "Nishitoba",
"suffix": ""
},
{
"first": "Yuya",
"middle": [],
"last": "Unno",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"Enrique"
],
"last": "Yalta Soplin",
"suffix": ""
},
{
"first": "Jahn",
"middle": [],
"last": "Heymann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Wiesner",
"suffix": ""
},
{
"first": "Nanxin",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of 19th Annual Conference of the International Speech Communication Association (INTER-SPEECH 2018)",
"volume": "",
"issue": "",
"pages": "2207--2211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson En- rique Yalta Soplin, Jahn Heymann, Matthew Wies- ner, Nanxin Chen, et al. 2018. ESPnet: End- to-end speech processing toolkit. In Proceed- ings of 19th Annual Conference of the Interna- tional Speech Communication Association (INTER- SPEECH 2018), pages 2207-2211.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Hybrid CTC/attention architecture for end-to-end speech recognition",
"authors": [
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Suyoun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Hershey",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Journal of Selected Topics in Signal Processing",
"volume": "11",
"issue": "8",
"pages": "1240--1253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi. 2017. Hybrid CTC/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Sig- nal Processing, 11(8):1240-1253.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Sequence-tosequence models can directly translate foreign speech",
"authors": [
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of 18th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "2625--2629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to- sequence models can directly translate foreign speech. In Proceedings of 18th Annual Conference of the International Speech Communication Associ- ation (INTERSPEECH 2017), pages 2625-2629.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram",
"authors": [
{
"first": "Ryuichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Eunwoo",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jae-Min",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of 2020 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryuichi Yamamoto, Eunwoo Song, and Jae-Min Kim. 2020. Parallel WaveGAN: A fast waveform genera- tion model based on generative adversarial networks with multi-resolution spectrogram. In Proceedings of 2020 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP 2020).",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Open source toolkit for speech to text translation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Zenkel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "Prague Bull. Math. Linguistics",
"volume": "111",
"issue": "",
"pages": "125--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Zenkel, Matthias Sperber, Jan Niehues, Markus M\u00fcller, Ngoc-Quan Pham, Sebastian St\u00fcker, and Alex Waibel. 2018. Open source toolkit for speech to text translation. Prague Bull. Math. Lin- guistics, 111:125-135.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "RETURNN as a generic flexible neural toolkit with application to translation and speech recognition",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Zeyer",
"suffix": ""
},
{
"first": "Tamer",
"middle": [],
"last": "Alkhouli",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "128--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Zeyer, Tamer Alkhouli, and Hermann Ney. 2018. RETURNN as a generic flexible neural toolkit with application to translation and speech recognition. In Proceedings of ACL 2018, System Demonstrations, pages 128-133.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Directory structure of ESPnet-ST",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "encoder init. ( 1 ) 45.03 + MT decoder init.",
"uris": null,
"num": null
},
"TABREF0": {
"content": "<table><tr><td>, 2018)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "Framework comparison on supported tasks in January, 2020."
},
"TABREF1": {
"content": "<table><tr><td/><td/><td>Es \u2192 En</td></tr><tr><td/><td>Model</td><td>Fisher</td><td>CallHome</td></tr><tr><td/><td/><td colspan=\"2\">dev dev2 test devtest evltest</td></tr><tr><td>E2E</td><td>Char RNN + + MT decoder init. ( 2 )</td><td colspan=\"2\">46.25 47.60 46.72 17.62 17.50</td></tr><tr><td/><td>+ SpecAugment ( 3 )</td><td colspan=\"2\">48.94 49.32 48.39 18.83 18.67</td></tr><tr><td/><td>+ Ensemble 3 models ( 1 + 2 + 3 )</td><td colspan=\"2\">50.76 52.02 50.85 19.91 19.36</td></tr><tr><td/><td>Char RNN ASR \u2192 Char RNN MT (Weiss et al., 2017)</td><td colspan=\"2\">45.10 46.10 45.50 16.20 16.60</td></tr><tr><td>Cascade</td><td colspan=\"3\">Char RNN ASR \u2192 Char RNN MT (Inaguma et al., 2019) \u2663 37.3 39.6 38.6 16.8 ESPnet-ST</td><td>16.5</td></tr><tr><td/><td>Transformer ASR \u2666 \u2192 Transformer MT</td><td colspan=\"2\">41.96 43.46 42.16 19.56 19.82</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "ASR-MTL (Weiss et al., 2017) 48.30 49.10 48.70 16.80 17.40 ESPnet-ST (Transformer) ASR-MTL (multi-task w/ ASR) 46.64 47.64 46.45 16.80 16.80 + MT-MTL (multi-task w/ MT) 47.17 48.20 46.99 17.51 17.64 ASR encoder init. ( 1 ) 46.25 47.11 46.21 17.35 16.94"
},
"TABREF2": {
"content": "<table><tr><td/><td>Model</td><td>En \u2192 Fr</td></tr><tr><td/><td>Transformer + ASR/MT-trans + KD 1</td><td>17.02</td></tr><tr><td/><td>+ Ensemble 3 models</td><td>17.8</td></tr><tr><td/><td>Transformer + PT + adaptor 2</td><td>16.80</td></tr><tr><td/><td>Transformer + PT + SpecAugment 3</td><td>17.0</td></tr><tr><td/><td>RNN + TCEN 4,\u2663</td><td>17.05</td></tr><tr><td>E2E</td><td>ESPnet-ST (Transformer)</td></tr><tr><td/><td>ASR-MTL</td><td>15.30</td></tr><tr><td/><td>+ MT-MLT</td><td>15.47</td></tr><tr><td/><td>ASR encoder init.</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "BLEU of ST systems on Fisher-CallHome Spanish corpus. \u2663 Implemented w/ ESPnet. \u2666 w/ SpecAugment."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table",
"text": "BLEU of ST systems on How2 corpus tions are available in conf/train.yaml and conf/decode.yaml, respectively."
},
"TABREF4": {
"content": "<table><tr><td/><td>Transformer \u2192 Transformer ASR 1</td><td>18.5</td><td>21.5</td><td>27.9</td><td>22.5</td><td>16.8</td><td>11.1</td><td>22.2</td><td>18.9</td></tr><tr><td>Cascade</td><td>ESPnet-ST</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>Transformer ASR \u2192</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "1,\u2663 17.30 20.10 26.90 20.80 16.50 10.50 18.80 16.80 ESPnet-ST (Transformer) ASR encoder/MT decoder init. 22.33 27.26 31.54 27.84 20.91 15.32 26.86 22.81 + SpecAugment 22.91 28.01 32.69 27.96 21.90 15.75 27.43 23.75 Transformer MT 23.65 29.04 33.84 28.68 22.68 16.39 27.91 24.04"
},
"TABREF5": {
"content": "<table><tr><td/><td/><td>En\u2192De</td><td/><td/><td>De\u2192En</td><td/></tr><tr><td colspan=\"7\">Framework test2012 test2013 test2014 test2012 test2013 test2014</td></tr><tr><td>Fairseq</td><td>27.73</td><td>29.45</td><td>25.14</td><td>32.25</td><td>34.23</td><td>29.49</td></tr><tr><td>ESPnet-ST</td><td>26.92</td><td>28.88</td><td>24.70</td><td>32.19</td><td>33.46</td><td>29.22</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "BLEU of ST systems on Must-C corpus. \u2663 Implemented w/ Fairseq. 1 (DiGangi et al., 2019b)"
},
"TABREF6": {
"content": "<table><tr><td>4.2 Libri-trans (En\u2192 Fr)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table",
"text": "BLEU of MT systems on IWSLT 2016 corpus"
}
}
}
}