{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:09:43.192559Z"
},
"title": "End-to-End Speech Translation with Adversarial Training",
"authors": [
{
"first": "Xuancai",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": "xcli@hit-mtlab.net"
},
{
"first": "Kehai",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "khchen@nict.go.jp"
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": "tjzhao@hit.edu.cn"
},
{
"first": "Muyun",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": "yangmuyun@hit.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "End-to-end speech translation usually leverages audio-to-text parallel data to train an available speech translation model which has shown impressive results on various speech translation tasks. Due to the artificial cost of collecting audio-to-text parallel data, the speech translation is a natural low-resource translation scenario, which greatly hinders its improvement. In this paper, we proposed a new adversarial training method to leverage target monolingual data to relieve the lowresource shortcoming of speech translation. In our method, the existing speech translation model is considered as a Generator to gain a target language output, and another neural Discriminator is used to guide the distinction between outputs of speech translation model and true target monolingual sentences. Experimental results on the CCMT 2019-BSTC dataset speech translation task demonstrate that the proposed methods can significantly improve the performance of the end-to-end speech translation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "End-to-end speech translation usually leverages audio-to-text parallel data to train an available speech translation model which has shown impressive results on various speech translation tasks. Due to the artificial cost of collecting audio-to-text parallel data, the speech translation is a natural low-resource translation scenario, which greatly hinders its improvement. In this paper, we proposed a new adversarial training method to leverage target monolingual data to relieve the lowresource shortcoming of speech translation. In our method, the existing speech translation model is considered as a Generator to gain a target language output, and another neural Discriminator is used to guide the distinction between outputs of speech translation model and true target monolingual sentences. Experimental results on the CCMT 2019-BSTC dataset speech translation task demonstrate that the proposed methods can significantly improve the performance of the end-to-end speech translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Typically, a traditional speech translation (ST) system usually consists of two components: an automatic speech recognition (ASR) model and a machine translation (MT) model. Firstly, the speech recognition module transcribes the source language speech into the source language utterances (Chan et al., 2016; Chiu et al., 2018) . Secondly, the machine translation module translates the source language utterances into the target language utterances (Bahdanau et al., 2014 ). Due to the success of end-to-end approaches in both automatic speech recognition and machine translation, researchers are increasingly interested in end-to-end speech translation. And, it has shown impressive results on various speech translation tasks (Duong et al., 2016; B\u00e9rard et al., 2016 B\u00e9rard et al., , 2018 .",
"cite_spans": [
{
"start": 288,
"end": 307,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 308,
"end": 326,
"text": "Chiu et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 448,
"end": 470,
"text": "(Bahdanau et al., 2014",
"ref_id": "BIBREF1"
},
{
"start": 727,
"end": 747,
"text": "(Duong et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 748,
"end": 767,
"text": "B\u00e9rard et al., 2016",
"ref_id": "BIBREF4"
},
{
"start": 768,
"end": 789,
"text": "B\u00e9rard et al., , 2018",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, due to the artificial cost of collecting audio-to-text parallel data, speech translation is a natural low-resource translation scenario, which greatly hinders its improvement. Actually, the audio-to-text parallel data has only tens to hundreds of hours which are equivalent to about hundreds of thousands of bilingual sentence pairs. Thus, it is far from enough for the training of a high-quality speech translation system compare to bilingual parallel data of millions or even tens of millions for training a high-quality text-only NMT. Recently, there have some recent works that explore to address this issue. Bansal et al. (2018) pre-trained an ASR model on high-resource data, and then finetuned the ASR model for low-resource scenarios. Weiss et al. (2017) and Anastasopoulos and Chiang (2018) proposed multi-task learning methods to train the ST model with ASR, ST, and NMT tasks simultaneously. Liu et al. (2019) proposed a Knowledge Distillation approach which utilizes a text-only MT model to guide the ST model because there is a huge performance gap between end-toend ST and MT model. Despite their success, these approaches still need additional labeled data, such as the source language speech, source language transcript, and target language translation.",
"cite_spans": [
{
"start": 622,
"end": 642,
"text": "Bansal et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 752,
"end": 771,
"text": "Weiss et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 776,
"end": 808,
"text": "Anastasopoulos and Chiang (2018)",
"ref_id": "BIBREF0"
},
{
"start": 912,
"end": 929,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we proposed a new adversarial training method to leverage target monolingual data to relieve the low-resource shortcoming of end-to-end speech translation. The proposed method consists of a generator model and a discriminator model. Specifically, the existing speech translation model is considered as a Generator to gain a target language output, and another neural Discriminator is used to guide the distinction between outputs of speech translation model and true target monolingual sentences. In particular, the Generator and the Discriminator are trained iteratively to challenge and learn from each other step by step to gain a better speech translation model. Experimental results on CCMT 2019-BSTC dataset speech translation task demonstrate that the proposed methods can significantly improve the performance of the endto-end speech translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The framework for the method of adversarial training consists of a generator and a discriminator. In this paper, Generator is the existing end-to-end ST model, which is based on the encoder-decoder model with an attention mechanism (B\u00e9rard et al., 2016) . The discriminator is a model based on a convolutional neural network, and the output is a quality score. The discriminator is aiming to get higher quality scores for real text and lower quality scores for the output of the ST model in the discriminator training step. In other words, the discriminator is expected to distinguish the input text as much as possible. Meanwhile, our method can not only leverage the ground truth to supervise the training of ST model\uff0cbut also make use of the discriminator to enhance the output of the ST model by using target monolingual data, as shown in Figure 1 .",
"cite_spans": [
{
"start": 232,
"end": 253,
"text": "(B\u00e9rard et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 843,
"end": 851,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "For the end-to-end speech translation, we choose an encoder-decoder model with attention. It takes as input a sequence of audio features x = (x_1, x_2, ..., x_t) and produces an output sequence of words y = (y_1, y_2, ..., y_m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},
{
"text": "The speech encoder is a pyramid bidirectional long short term memory (pBLSTM) (Chan et al., 2016; Hochreiter and Schmidhuber, 1997). It transforms the speech features x = (x_1, x_2, ..., x_t) into a high-level representation H = (h_1, h_2, ..., h_n), where n \u2264 t. In the pBLSTM, the outputs of two adjacent time steps of the current layer are concatenated and passed to the next layer.",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "(Chan et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 98,
"end": 131,
"text": "Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},
{
"text": "h i j = pBLSTM(h i j\u22121 , [h i\u22121 2j , h i\u22121 2j+1 ]). (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},
{
"text": "Also, the pBLSTM can reduce the length of the encoder input from t to n. In our experiment, we stack 3 layers of the pBLSTM, so we were able to reduce the time step 8 times. The decoder is an attention-based LSTM, and it is a word-level decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},
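{
"text": "A minimal PyTorch-style sketch of one pBLSTM layer as described above, assuming hypothetical wiring (this is an illustration, not the authors' implementation): adjacent time steps are concatenated before a bidirectional LSTM, so stacking 3 such layers reduces the number of time steps by a factor of 8. The layer sizes follow Section 3.2 (40-dimensional filter bank inputs, 256 units per direction).\nimport torch\nimport torch.nn as nn\n\nclass PBLSTMLayer(nn.Module):\n    def __init__(self, input_dim, hidden_dim):\n        super().__init__()\n        # Input size doubles because two adjacent frames are concatenated.\n        self.blstm = nn.LSTM(input_dim * 2, hidden_dim, batch_first=True, bidirectional=True)\n\n    def forward(self, x):  # x: (batch, time, input_dim)\n        b, t, d = x.size()\n        if t % 2 == 1:  # drop the last frame if the length is odd\n            x, t = x[:, :-1, :], t - 1\n        x = x.contiguous().view(b, t // 2, d * 2)  # merge adjacent time steps\n        out, _ = self.blstm(x)  # (batch, time // 2, 2 * hidden_dim)\n        return out\n\nencoder = nn.Sequential(PBLSTMLayer(40, 256), PBLSTMLayer(512, 256), PBLSTMLayer(512, 256))\nfeatures = torch.randn(8, 1000, 40)  # (batch, frames, fbank_dim)\nh = encoder(features)  # (8, 125, 512): the time axis shrinks from 1000 to 125",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},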
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = Attention(s i , h), s i = LSTM(s i\u22121 , c i\u22121 , y i\u22121 ), y i = Generate(s i , c i ),",
"eq_num": "(2)"
}
],
"section": "Generator",
"sec_num": "2.1"
},
{
"text": "where the Attention function is a location-aware attention mechanism (Chorowski et al., 2015) , and the Generate function is a feed-forward network to compute a score for each symbol in target vocabulary.",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "(Chorowski et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},
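{
"text": "A minimal sketch of one decoder step in Eq. (2), with hypothetical dimensions; the paper uses location-aware attention, which this sketch replaces with plain dot-product attention for brevity, so it is an illustration rather than the authors' implementation.\nimport torch\nimport torch.nn as nn\n\nclass DecoderStep(nn.Module):\n    def __init__(self, hidden_dim, vocab_size, embed_dim=256):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, embed_dim)\n        self.lstm = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)\n        self.generate = nn.Linear(hidden_dim * 2, vocab_size)  # Generate(s_i, c_i)\n\n    def forward(self, y_prev, state, c_prev, H):\n        # y_prev: (batch,) previous word ids; H: (batch, n, hidden_dim) encoder states\n        s, cell = self.lstm(torch.cat([self.embed(y_prev), c_prev], dim=-1), state)  # s_i = LSTM(s_{i-1}, c_{i-1}, y_{i-1})\n        attn = torch.softmax(torch.bmm(H, s.unsqueeze(-1)).squeeze(-1), dim=-1)\n        c = torch.bmm(attn.unsqueeze(1), H).squeeze(1)  # c_i = Attention(s_i, h)\n        logits = self.generate(torch.cat([s, c], dim=-1))  # scores for y_i over the vocabulary\n        return logits, (s, cell), c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "2.1"
},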
{
"text": "Discriminator takes either real text or ST translations as input and outputs a scalar QS as the quality score. For the discriminator, we use a traditional convolution neural network (CNN) (Kalchbrenner et al., 2016) which focuses on capturing local repeating features and has a better computational efficiency than recurrent neural network (RNN) (LeCun et al., 2015). The real text of the target language is encoded as a sequence of one-hot vectors y = (y 1 , y 2 , \u2022 \u2022 \u2022 , y m ), and the output generated by the ST model is denoted as a sequence of vectors y = ( y 1 , y 2 , \u2022 \u2022 \u2022 , y n ). The sequence of vectors y or y are given as input to a single layer neural network. The output of the neural network is fed into a stack of two onedimensional CNN layers and an average pooling layer. Then we use a linear layer to get the quality score. Training the discriminator is easy to overfit because the probability distribution for ST model output is different from the one-hot encoding of the real text. To address this problem, we used earthmover distance in WGAN (Martin Arjovsky and Bottou, 2017) to estimate the distance between the ST model output and real text. The loss function of the discriminator is the standard WGAN loss, and adds a gradient penalty (Gulrajani et al., 2017) . Formally, the loss function of the discriminator as below:",
"cite_spans": [
{
"start": 188,
"end": 215,
"text": "(Kalchbrenner et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 1262,
"end": 1286,
"text": "(Gulrajani et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminator",
"sec_num": "2.2"
},
{
"text": "Loss D = \u03bb 1 {E y\u223cPst [D( y)] \u2212 E y\u223cP real [D(y)]} + \u03bb 2 E\u0177 \u223cP\u0177 [( \u0177 ||D(\u0177)|| \u2212 1) 2 ], (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminator",
"sec_num": "2.2"
},
{
"text": "where \u03bb 1 and \u03bb 2 are hyper-parameter, P st is the distribution of ST model y and P real is the distribution of real text y, D(y) is the quality score for y given by discriminator,\u0177 are samples generate by randomly interpolating between y and y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminator",
"sec_num": "2.2"
},
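{
"text": "A minimal sketch of the discriminator loss in Eq. (3), assuming the real and generated sequences have been padded to a common length and are given as (batch, length, vocab) tensors of one-hot and softmax distributions respectively; it illustrates a WGAN loss with gradient penalty rather than the authors' exact code. The default weights follow Section 3.2 (lambda_1 = 0.0001, lambda_2 = 10).\nimport torch\n\ndef discriminator_loss(D, y_real, y_fake, lambda1=0.0001, lambda2=10.0):\n    # D maps a (batch, length, vocab) sequence to one scalar quality score per example.\n    y_fake = y_fake.detach()  # the ST model (generator) is fixed during this step\n    wgan_term = D(y_fake).mean() - D(y_real).mean()\n    # Gradient penalty on random interpolations between real and generated sequences.\n    eps = torch.rand(y_real.size(0), 1, 1, device=y_real.device)\n    y_tilde = (eps * y_real + (1 - eps) * y_fake).requires_grad_(True)\n    grads = torch.autograd.grad(D(y_tilde).sum(), y_tilde, create_graph=True)[0]\n    penalty = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()\n    return lambda1 * wgan_term + lambda2 * penalty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminator",
"sec_num": "2.2"
},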
{
"text": "Both the ST model and the discriminator are trained iteratively from scratch. For the ST model training step, the parameters of discriminator are fixed. We train the ST model by minimizing the sequence loss Loss ST which is the cross-entropy between the ground truth and output of the ST model. And at the same time, the discriminator generates a quality score QS for the output of the ST model. Formally, the final loss function in the training process is as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "2.3"
},
{
"text": "Loss G = \u03bb st Loss ST \u2212 (1 \u2212 \u03bb st )QS, (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "2.3"
},
{
"text": "where \u03bb st \u2208 [0,1] is hyper-parameter. For the discriminator training step, the parameters of ST model are fixed. The discriminator uses the probability distribution of the ST model output and the real text for training. The specific learning process is shown in Algorithm 1. Note that the discriminator is only used in the training of the model while it is not used during the decoding. Once the training ends, the ST model implicitly utilizes the translation knowledge learned from discriminator to decode the input audio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "2.3"
},
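{
"text": "A minimal sketch of the generator objective in Eq. (4): the cross-entropy sequence loss against the reference translation combined with the negated quality score from the discriminator, weighted by lambda_st (0.5 in Section 3.2). The tensor shapes and the padding id are assumptions made for illustration.\nimport torch.nn.functional as F\n\ndef generator_loss(logits, targets, quality_score, lambda_st=0.5, pad_id=0):\n    # logits: (batch, length, vocab); targets: (batch, length) reference word ids.\n    loss_st = F.cross_entropy(logits.transpose(1, 2), targets, ignore_index=pad_id)\n    # Maximizing the discriminator's quality score QS corresponds to subtracting it here.\n    return lambda_st * loss_st - (1 - lambda_st) * quality_score.mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "2.3"
},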
{
"text": "Require: G, the Generator; D, the Discriminator; dataset(X,Y), speech translation parallel corpus. Ensure: G , generator after adversarial training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},
{
"text": "1: for iteration of adversarial training do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},
{
"text": "for iteration of training G do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},
{
"text": "Sample a subset(X batch ,Y batch ) from dataset(X,Y) 4: end for 14: end for 3 Experiment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},
{
"text": "Y batch =G(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},
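{
"text": "A minimal sketch of the alternating schedule in Algorithm 1, reusing the generator_loss and discriminator_loss sketches above; the model, optimizer, data-loading, and one-hot helper arguments are hypothetical, and k = 5 reflects the update ratio reported in Section 3.2 rather than a value stated in Algorithm 1 itself.\nimport torch\n\ndef adversarial_training(st_model, discriminator, g_opt, d_opt, loader, one_hot_fn, num_rounds, k=5):\n    data = iter(loader)\n    for _ in range(num_rounds):\n        for _ in range(k):  # train G with D fixed\n            x, y = next(data)\n            logits = st_model(x)\n            qs = discriminator(torch.softmax(logits, dim=-1))\n            g_opt.zero_grad()\n            generator_loss(logits, y, qs).backward()\n            g_opt.step()\n        x, y = next(data)  # train D with G fixed\n        with torch.no_grad():\n            y_hat = torch.softmax(st_model(x), dim=-1)\n        d_opt.zero_grad()\n        discriminator_loss(discriminator, one_hot_fn(y), y_hat).backward()\n        d_opt.step()\n    return st_model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Adversarial Training",
"sec_num": null
},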
{
"text": "We conduct experiments on CCMT 2019-BSTC (Yang et al., 2019) which is collected from the Chinese mandarin talks and reports as shown in Table 1 . It contains 50 hours of real speeches, including three parts, the audio files in Chinese, the transcripts, and the English translations. We keep the original data partitions of the data set and segmented the long conversations used for simultaneous interpretation into short utterances. ",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "3.1"
},
{
"text": "We process Speech files, to extract 40-dimensional Filter bank features with a step size of 10ms and window size of 25ms. To shorten the training time, we ignored the utterances in the corpus that were longer than 30 seconds. We lowercase and tokenize all English text, and normalize the punctuation. a word-level vocabulary of size 17k is used for target language in English. Then the text data are represented by sequences of 1700dimensional one-hot vectors. Our ST model uses 3 layers of pBLSTM with 256 units per direction as the encoder, and 512-dimensional location-aware attention was used in the attention layer. The decoder was a 2 layers LSTM with 512 units and 2 layers neural network with 512 units to predict words in the vocabulary. For the discriminator model, we use a linear layer with 128 units at the bottom of the model. Then, using 2 layers onedimensional CNN, from bottom to top, the window size is 2, the stride is 1, and the window size is 3, the stride is 1. Adam (Kingma and Ba, 2014) was used as the optimization function to train our model, which has a learning rate of 0.0001 and a mini-batch size of 8. The hyper-parameters \u03bb st , \u03bb 1 and \u03bb 2 are 0.5, 0.0001 and 10 respectively. And the train frequency of the ST model is 5 times then the discriminator. We used the BLEU (Papineni et al., 2002) metric to evaluate our ST models. We try five settings on Speech Translation. The Pipeline model cascades an ASR and an MT model. For the ASR model, we use an end-to-end speech recognition model similar to LAS and trained on CCMT 2019-BSTC. For the MT model, we use open source toolkit OpenNMT (Klein et al., 2017) to train an NMT model. The end-to-end model (described in section 2) does not make any use of source language transcripts. The pre-trained model is the same as the end-to-end model, but its encoder is initialized with a pre-trained ASR model. And the pretrained ASR model is trained using Aishell (Bu et al., 2017) , a 178 hours Chinese Mandarin speech corpus, which has the same language as our chosen speech translation corpus. The multitask model is a one-to-many method, where the ASR and ST tasks share an encoder. The Adversarial Training is the approach proposed in this paper. Table 2 shows the result of the different models on the validation set of CCMT 2019-BSTC. From this result, we can find that the end-to-end methods including pre-trained, multitask and Adversarial Training all get results comparable to the Pipeline model. Among them, the pre-trained model gets the best results. Our analysis is that this model uses a larger scale of speech corpus for pre-training, thus introducing more information into the model. We can see that the Adversarial Training method can obtain 19.1 BLEU, which is an improvement of 1.4 BLEU over the end-to-end baseline model, and even better than the multitask method. The multitasking approach uses transcription of source language speech, and our proposed approach is superior to it without using other information.",
"cite_spans": [
{
"start": 1302,
"end": 1325,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 1620,
"end": 1640,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 1938,
"end": 1955,
"text": "(Bu et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 2226,
"end": 2233,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.2"
},
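{
"text": "A minimal sketch of the input feature extraction described above: 40-dimensional filter bank features with a 10 ms step and a 25 ms window. torchaudio is an assumed toolkit and the file name is hypothetical; the paper does not name the tool it used.\nimport torchaudio\n\nwaveform, sample_rate = torchaudio.load('utterance.wav')  # hypothetical input file\nfbank = torchaudio.compliance.kaldi.fbank(\n    waveform,\n    num_mel_bins=40,     # 40-dimensional features\n    frame_shift=10.0,    # 10 ms step size\n    frame_length=25.0,   # 25 ms window size\n    sample_frequency=sample_rate,\n)  # fbank: (num_frames, 40)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.2"
},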
{
"text": "19.4 end-to-end 17.7 pre-trained 20.4 multitask 18.9 Adversarial Training 19.1 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ST pipeline",
"sec_num": null
},
{
"text": "In this paper, we present the Adversarial Training approach to improve the end-to-end speech translation model. We applied GAN to the speech translation task and achieved good results in the experimental results. Since GAN's structure is used, this method can be applied to any end-toend speech translation model. Unlike the multitask, pre-trained, and knowledge distillation previously proposed, this method requires the use of additional parallel corpus, which is very expensive to collect. In the future, we will experiment with unpaired text in order to be able to use this method to utilize an infinite amount of spoken text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tied multitask learning for neural speech translation",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.06655"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. arXiv preprint arXiv:1802.06655.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pretraining on high-resource speech recognition improves low-resource speech-to-text translation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Kamper",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01431"
]
},
"num": null,
"urls": [],
"raw_text": "Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Pre- training on high-resource speech recognition improves low-resource speech-to-text translation. arXiv preprint arXiv:1809.01431.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "End-toend automatic speech translation of audiobooks",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6224--6228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Laurent Besacier, Ali Can Ko- cabiyikoglu, and Olivier Pietquin. 2018. End-to- end automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6224-6228. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Listen and translate: A proof of concept for end-to-end speech-to-text translation",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.01744"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre B\u00e9rard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. arXiv preprint arXiv:1612.01744.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Na",
"suffix": ""
},
{
"first": "Bengu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2017,
"venue": "the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. 2017. Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline. In 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), pages 1-5. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Chan, N. Jaitly, Q. Le, and O. Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960-4964.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "State-of-the-art speech recognition with sequence-to-sequence models",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "T",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prabhavalkar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gonina",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bacchiani",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4774--4778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani. 2018. State-of-the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4774-4778.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention-based models for speech recognition",
"authors": [
{
"first": "Jan",
"middle": [
"K"
],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Serdyuk",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "577--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recogni- tion. In Advances in neural information processing systems, pages 577-585.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An attentional model for speech translation without transcription",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "949--959",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949-959.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved training of wasserstein gans",
"authors": [
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Faruk",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dumoulin",
"suffix": ""
},
{
"first": "Aaron C",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5767--5777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. 2017. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767-5777.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural machine translation in linear time",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.10099"
]
},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.02810"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep learning. nature",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "521",
"issue": "",
"pages": "436--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436-444.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "End-to-end speech translation with knowledge distillation",
"authors": [
{
"first": "Yuchen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.08075"
]
},
"num": null,
"urls": [],
"raw_text": "Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019. End-to-end speech translation with knowledge distillation. arXiv preprint arXiv:1904.08075.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Wasserstein generative adversarial networks",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34 th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SC Martin Arjovsky and Leon Bottou. 2017. Wasser- stein generative adversarial networks. In Pro- ceedings of the 34 th International Conference on Machine Learning, Sydney, Australia.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for com- putational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sequence-tosequence models can directly translate foreign speech",
"authors": [
{
"first": "Ron",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.08581"
]
},
"num": null,
"urls": [],
"raw_text": "Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to- sequence models can directly translate foreign speech. arXiv preprint arXiv:1703.08581.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Ccmt 2019 machine translation evaluation report",
"authors": [
{
"first": "Muyun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xixin",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Jiayi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiliyaer",
"middle": [],
"last": "Jiaermuhamaiti",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "China Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "105--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muyun Yang, Xixin Hu, Hao Xiong, Jiayi Wang, Yiliyaer Jiaermuhamaiti, Zhongjun He, Weihua Luo, and Shujian Huang. 2019. Ccmt 2019 machine translation evaluation report. In China Conference on Machine Translation, pages 105-128. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Proposed end-to-end speech translation with adversarial training",
"num": null,
"uris": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "BLEU scores of the speech translation experiments"
}
}
}
}