arXiv:1706.03762

Attention Is All You Need

Published on Jun 12, 2017
Authors:
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

Community

Introduces the Transformer architecture for natural language processing (NLP). Input embeddings are summed with positional encodings (1D sines and cosines of different frequencies) and fed to a stack of encoder layers (multi-head self-attention and position-wise feed-forward layers in a residual setup with layer normalization). The decoder layers receive the encoder output together with the previously generated outputs (again embedded and summed with positional encodings) and apply masked multi-head self-attention, multi-head cross-attention over the encoder output, and feed-forward layers, also in a residual setup with layer normalization. Attention is formulated as an information retrieval problem: the (same) input is projected to query, key, and value embeddings; the dot products of queries with keys are scaled and passed through a softmax, and the resulting weights are used to combine the values. For multi-head attention, the projections are split into several heads, attended in parallel, and the results concatenated. Trained for language translation (English to German and English to French), achieving good BLEU at low training cost. Attention-head visualizations are in the appendix. From Google and the University of Toronto.
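
For readers who want to see the retrieval view of attention concretely, below is a minimal NumPy sketch of sinusoidal positional encoding, scaled dot-product attention, and the multi-head split/attend/concatenate scheme. It is an illustration of the equations in the paper, not the reference tensor2tensor code; the shapes, function names, and random stand-in projection weights are assumptions made for the example, and the decoder-side causal mask is omitted.

```python
import numpy as np


def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: sine on even dimensions, cosine on odd ones."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)            # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def scaled_dot_product_attention(q, k, v):
    """softmax(q k^T / sqrt(d_k)) v -- the 'retrieval' step described above."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)     # (..., seq_q, seq_k)
    weights = softmax(scores, axis=-1)
    return weights @ v


def multi_head_attention(x, num_heads, rng):
    """Project x to queries, keys, values; split into heads; attend in parallel; concatenate."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random matrices stand in for the learned projection weights (illustration only).
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    def split_heads(t):                                # (seq, d_model) -> (heads, seq, d_head)
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    heads = scaled_dot_product_attention(split_heads(q), split_heads(k), split_heads(v))
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)  # concatenate heads
    return concat @ w_o                                # output projection


rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 64))            # 10 token embeddings, d_model = 64
x = tokens + positional_encoding(10, 64)          # add positional information
out = multi_head_attention(x, num_heads=8, rng=rng)
print(out.shape)                                  # (10, 64)
```

The decoder's masked self-attention and the encoder-decoder cross-attention follow the same pattern; masking simply adds a large negative value to the disallowed score positions before the softmax.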

Links: Google Blog, GitHub (tensorflow/tensor2tensor, Unofficial PyTorch), PapersWithCode, Harvard NLP: The Annotated Transformer (updated)

Models citing this paper 19

Datasets citing this paper 0

No datasets link to this paper yet.

Cite arxiv.org/abs/1706.03762 in a dataset README.md to link it from this page.

Spaces citing this paper 36

Collections including this paper 36