1 Introduction

Neural machine translation (NMT) is a recently introduced approach to machine translation (Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2015; Sutskever et al., 2014). In neural machine translation, one builds a single neural network that reads a source sentence and generates its translation. The whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence, using a bilingual corpus. NMT models have been shown to perform as well as the most widely used conventional translation systems (Sutskever et al., 2014; Bahdanau et al., 2015).

Neural machine translation has a number of advantages over the existing statistical machine translation system, specifically the phrase-based system (Koehn et al., 2003). First, NMT requires a minimal set of domain knowledge. For instance, none of the models proposed by Sutskever et al. (2014), Bahdanau et al. (2015) or Kalchbrenner and Blunsom (2013) assume any linguistic property of the source and target sentences other than that they are sequences of words. Second, the whole system is jointly trained to maximize the translation performance, unlike the existing phrase-based system, which consists of many separately trained features whose weights are then tuned jointly. Lastly, the memory footprint of an NMT model is often much smaller than that of the existing system, which relies on maintaining large tables of phrase pairs.

Despite these advantages and promising results, there is a major limitation in NMT compared to the existing phrase-based approach: the number of target words must be limited. This is mainly because the complexity of training and using an NMT model increases as the number of target words increases. A usual practice is to construct a target vocabulary of the K most frequent words (a so-called shortlist), where K is often in the range of 30k (Bahdanau et al., 2015) to 80k (Sutskever et al., 2014). Any word not included in this vocabulary is mapped to a special token ([UNK]) representing an unknown word. This approach works well when there are only a few unknown words in the target sentence, but it has been observed that the translation performance degrades rapidly as the number of unknown words increases (Cho et al., 2014a; Bahdanau et al., 2015).

In this paper, we propose an approximate training algorithm based on (biased) importance sampling that allows us to train an NMT model with a much larger target vocabulary. The proposed algorithm effectively keeps the computational complexity during training at the level of using only a small subset of the full vocabulary. Once the model with a very large target vocabulary is trained, one can choose to use either all the target words or only a subset of them.

We compare the proposed algorithm against the baseline shortlist-based approach on the tasks of English-French and English-German translation, using the NMT model introduced by Bahdanau et al. (2015). The empirical results demonstrate that we can potentially achieve better translation performance using larger vocabularies, and that our approach does not sacrifice too much speed in either training or decoding. Furthermore, we show that the model trained with this algorithm achieves the best translation performance yet reported for single NMT models on the WMT'14 English-French translation task.
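To make the shortlist procedure described above concrete, the following is a minimal Python sketch, assuming a tokenized target-side corpus; the cutoff k, the [UNK] token string and the helper names are illustrative choices, not part of any specific NMT implementation.

from collections import Counter

UNK = "[UNK]"

def build_shortlist(corpus_sentences, k=30000):
    # Keep only the k most frequent target words (the shortlist).
    counts = Counter(w for sent in corpus_sentences for w in sent)
    return {w for w, _ in counts.most_common(k)}

def apply_shortlist(sentence, shortlist):
    # Map every out-of-shortlist word to the unknown-word token.
    return [w if w in shortlist else UNK for w in sentence]

# Toy usage: with k=4, the rarest words fall outside the shortlist.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "axolotl", "sat"]]
shortlist = build_shortlist(corpus, k=4)
print(apply_shortlist(["the", "axolotl", "ran"], shortlist))
# -> ['the', '[UNK]', '[UNK]']

As the sketch suggests, any target word outside the K most frequent ones is simply collapsed into [UNK], which is exactly why translation quality drops when target sentences contain many rare words.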
2 Neural Machine Translation and Limited Vocabulary Problem

In this section, we briefly describe an approach to neural machine translation proposed recently by Bahdanau et al. (2015). Based on this description, we explain the issue of limited vocabularies in neural machine translation.

2.1 Neural Machine Translation

Neural machine translation is a recently proposed approach to machine translation that uses a single neural network trained jointly to maximize the translation performance (Forcada and Ñeco, 1997; Kalchbrenner and Blunsom, 2013; Cho et al., 2014b; Sutskever et al., 2014; Bahdanau et al., 2015).

Neural machine translation is often implemented as an encoder-decoder network. The encoder reads the source sentence $x = (x_1, \ldots, x_T)$ and encodes it into a sequence of hidden states $h = (h_1, \ldots, h_T)$:
$$h_t = f(x_t, h_{t-1}). \qquad (1)$$
Then the decoder, another recurrent neural network, generates a corresponding translation $y = (y_1, \ldots, y_{T'})$ based on the encoded sequence of hidden states $h$, by modeling the conditional probability of each target word, $p(y_t \mid y_{<t}, x)$.
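As a rough illustration of the recurrence in Eq. (1), the sketch below runs a plain tanh RNN over an embedded source sentence. This is a deliberate simplification: the model of Bahdanau et al. (2015) uses a bidirectional gated (GRU) encoder with attention, and all names, shapes and parameters here are hypothetical.

import numpy as np

def rnn_encode(embedded_source, W_x, W_h, b):
    # embedded_source: (T, d_emb), one embedding per source word x_t
    # W_x: (d_hid, d_emb), W_h: (d_hid, d_hid), b: (d_hid,)
    # Returns the hidden states h = (h_1, ..., h_T) as an array of shape (T, d_hid).
    T, _ = embedded_source.shape
    h_prev = np.zeros(b.shape[0])
    states = []
    for t in range(T):
        # h_t = f(x_t, h_{t-1}), with f taken to be a tanh RNN cell here
        h_prev = np.tanh(W_x @ embedded_source[t] + W_h @ h_prev + b)
        states.append(h_prev)
    return np.stack(states)

# Toy usage with random parameters
rng = np.random.default_rng(0)
T, d_emb, d_hid = 5, 8, 16
h = rnn_encode(rng.normal(size=(T, d_emb)),
               rng.normal(size=(d_hid, d_emb)) * 0.1,
               rng.normal(size=(d_hid, d_hid)) * 0.1,
               np.zeros(d_hid))
print(h.shape)  # (5, 16)

The decoder then consumes these hidden states (through an attention mechanism in the actual model) to produce the conditional distribution $p(y_t \mid y_{<t}, x)$ over target words at each step.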