mike dupont committed
Commit 16755b1
Parent(s): 823d722
attention
Browse files
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-3_1.png +3 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-4_1.png +3 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-4_2.png +3 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.html +15 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.org +712 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.pdf +0 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need_ind.html +18 -0
- 2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Needs.html +558 -0
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-3_1.png
ADDED
Git LFS Details
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-4_1.png
ADDED
Git LFS Details
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need-4_2.png
ADDED
Git LFS Details
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.html
ADDED
@@ -0,0 +1,15 @@
+<!DOCTYPE html>
+<html>
+<head>
+<title>Attention is All you Need</title>
+<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
+<meta name="generator" content="pdftohtml 0.36"/>
+<meta name="author" content="Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin"/>
+<meta name="date" content="2024-02-05T09:33:17+00:00"/>
+<meta name="subject" content="Neural Information Processing Systems http://nips.cc/"/>
+</head>
+<frameset cols="100,*">
+<frame name="links" src="Attention is All You Need_ind.html"/>
+<frame name="contents" src="Attention is All You Needs.html"/>
+</frameset>
+</html>
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.org
ADDED
@@ -0,0 +1,712 @@
1 |
+
<<1>>Attention Is All You Need\\
|
2 |
+
Ashish Vaswani∗\\
|
3 |
+
Noam Shazeer∗\\
|
4 |
+
Niki Parmar∗\\
|
5 |
+
Jakob Uszkoreit∗\\
|
6 |
+
Google Brain\\
|
7 |
+
Google Brain\\
|
8 |
+
Google Research\\
|
9 |
+
Google Research\\
|
10 |
+
avaswani@google.com\\
|
11 |
+
noam@google.com\\
|
12 |
+
nikip@google.com\\
|
13 |
+
usz@google.com\\
|
14 |
+
Llion Jones∗\\
|
15 |
+
Aidan N. Gomez∗ †\\
|
16 |
+
Łukasz Kaiser∗\\
|
17 |
+
Google Research\\
|
18 |
+
University of Toronto\\
|
19 |
+
Google Brain\\
|
20 |
+
llion@google.com\\
|
21 |
+
aidan@cs.toronto.edu\\
|
22 |
+
lukaszkaiser@google.com\\
|
23 |
+
Illia Polosukhin∗ ‡\\
|
24 |
+
illia.polosukhin@gmail.com\\
|
25 |
+
Abstract\\
|
26 |
+
The dominant sequence transduction models are based on complex recurrent or\\
|
27 |
+
convolutional neural networks that include an encoder and a decoder. The best\\
|
28 |
+
performing models also connect the encoder and decoder through an attention\\
|
29 |
+
mechanism. We propose a new simple network architecture, the Transformer,\\
|
30 |
+
based solely on attention mechanisms, dispensing with recurrence and convolutions\\
|
31 |
+
entirely. Experiments on two machine translation tasks show these models to\\
|
32 |
+
be superior in quality while being more parallelizable and requiring significantly\\
|
33 |
+
less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-\\
|
34 |
+
to-German translation task, improving over the existing best results, including\\
|
35 |
+
ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,\\
|
36 |
+
our model establishes a new single-model state-of-the-art BLEU score of 41.0 after\\
|
37 |
+
training for 3.5 days on eight GPUs, a small fraction of the training costs of the\\
|
38 |
+
best models from the literature.\\
|
39 |
+
1\\
|
40 |
+
Introduction\\
|
41 |
+
Recurrent neural networks, long short-term memory [[][[12] ]]and gated recurrent [[][[7] ]]neural networks\\
|
42 |
+
in particular, have been firmly established as state of the art approaches in sequence modeling and\\
|
43 |
+
transduction problems such as language modeling and machine translation [[][[29, 2, 5]. ]]Numerous\\
|
44 |
+
efforts have since continued to push the boundaries of recurrent language models and encoder-decoder\\
|
45 |
+
architectures [[][[31, 21, 13].]]\\
|
46 |
+
∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started\\
|
47 |
+
the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and\\
|
48 |
+
has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head\\
|
49 |
+
attention and the parameter-free position representation and became the other person involved in nearly every\\
|
50 |
+
detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and\\
|
51 |
+
tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and\\
|
52 |
+
efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and\\
|
53 |
+
implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating\\
|
54 |
+
our research.\\
|
55 |
+
†Work performed while at Google Brain.\\
|
56 |
+
‡Work performed while at Google Research.\\
|
57 |
+
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.\\
|
58 |
+
|
59 |
+
--------------
|
60 |
+
|
61 |
+
<<2>>Recurrent models typically factor computation along the symbol positions of the input and output\\
|
62 |
+
sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden\\
|
63 |
+
states ht, as a function of the previous hidden state ht−1 and the input for position t. This inherently\\
|
64 |
+
sequential nature precludes parallelization within training examples, which becomes critical at longer\\
|
65 |
+
sequence lengths, as memory constraints limit batching across examples. Recent work has achieved\\
|
66 |
+
significant improvements in computational efficiency through factorization tricks [[][[18] ]]and conditional\\
|
67 |
+
computation [[][[26], ]]while also improving model performance in case of the latter. The fundamental\\
|
68 |
+
constraint of sequential computation, however, remains.\\
|
69 |
+
Attention mechanisms have become an integral part of compelling sequence modeling and transduc-\\
|
70 |
+
tion models in various tasks, allowing modeling of dependencies without regard to their distance in\\
|
71 |
+
the input or output sequences [[][[2, 16]. ]]In all but a few cases [[][[22], ]]however, such attention mechanisms\\
|
72 |
+
are used in conjunction with a recurrent network.\\
|
73 |
+
In this work we propose the Transformer, a model architecture eschewing recurrence and instead\\
|
74 |
+
relying entirely on an attention mechanism to draw global dependencies between input and output.\\
|
75 |
+
The Transformer allows for significantly more parallelization and can reach a new state of the art in\\
|
76 |
+
translation quality after being trained for as little as twelve hours on eight P100 GPUs.\\
|
77 |
+
2\\
|
78 |
+
Background\\
|
79 |
+
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU\\
|
80 |
+
[[][[20], ]]ByteNet [[][[15] ]]and ConvS2S [[][[8], ]]all of which use convolutional neural networks as basic building\\
|
81 |
+
block, computing hidden representations in parallel for all input and output positions. In these models,\\
|
82 |
+
the number of operations required to relate signals from two arbitrary input or output positions grows\\
|
83 |
+
in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes\\
|
84 |
+
it more difficult to learn dependencies between distant positions [[][[11]. ]]In the Transformer this is\\
|
85 |
+
reduced to a constant number of operations, albeit at the cost of reduced effective resolution due\\
|
86 |
+
to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as\\
|
87 |
+
described in section [[][3.2.]]\\
|
88 |
+
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions\\
|
89 |
+
of a single sequence in order to compute a representation of the sequence. Self-attention has been\\
|
90 |
+
used successfully in a variety of tasks including reading comprehension, abstractive summarization,\\
|
91 |
+
textual entailment and learning task-independent sentence representations [[][[4, 22, 23, 19].]]\\
|
92 |
+
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-\\
|
93 |
+
aligned recurrence and have been shown to perform well on simple-language question answering and\\
|
94 |
+
language modeling tasks [[][[28].]]\\
|
95 |
+
To the best of our knowledge, however, the Transformer is the first transduction model relying\\
|
96 |
+
entirely on self-attention to compute representations of its input and output without using sequence-\\
|
97 |
+
aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate\\
|
98 |
+
self-attention and discuss its advantages over models such as [[][[14, 15] ]]and [[][[8].]]\\
|
99 |
+
3\\
|
100 |
+
Model Architecture\\
|
101 |
+
Most competitive neural sequence transduction models have an encoder-decoder structure [[][[5, 2, 29].\\
|
102 |
+
]]Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence\\
|
103 |
+
of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output\\
|
104 |
+
sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive\\
|
105 |
+
[[][[9], ]]consuming the previously generated symbols as additional input when generating the next.\\
|
106 |
+
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully\\
|
107 |
+
connected layers for both the encoder and decoder, shown in the left and right halves of Figure [[][1,\\
|
108 |
+
]]respectively.\\
|
109 |
+
3.1\\
|
110 |
+
Encoder and Decoder Stacks\\
|
111 |
+
Encoder:\\
|
112 |
+
The encoder is composed of a stack of N = 6 identical layers. Each layer has two\\
|
113 |
+
sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-\\
|
114 |
+
2\\
|
115 |
+
|
116 |
+
--------------
|
117 |
+
|
118 |
+
<<3>>[[file:Attention%20is%20All%20You%20Need-3_1.png]]\\
|
119 |
+
Figure 1: The Transformer - model architecture.\\
|
120 |
+
wise fully connected feed-forward network. We employ a residual connection [[][[10] ]]around each of\\
|
121 |
+
the two sub-layers, followed by layer normalization [[][[1]. ]]That is, the output of each sub-layer is\\
|
122 |
+
LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer\\
|
123 |
+
itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding\\
|
124 |
+
layers, produce outputs of dimension dmodel = 512.\\
|
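The sub-layer wrapper can be made concrete. The following is an editor's illustration in NumPy (a minimal sketch, not the authors' tensor2tensor code) of LayerNorm(x + Sublayer(x)); it applies to both the encoder and decoder sub-layers described here.

#+begin_src python
# Sketch of the sub-layer wrapper LayerNorm(x + Sublayer(x)) used around every
# encoder and decoder sub-layer. gamma and beta are the learned layer-normalization
# parameters of size d_model; x has shape (seq_len, d_model).
import numpy as np

def sublayer_connection(x, sublayer, gamma, beta, eps=1e-6):
    y = x + sublayer(x)                              # residual connection
    mean = y.mean(axis=-1, keepdims=True)
    std = y.std(axis=-1, keepdims=True)
    return gamma * (y - mean) / (std + eps) + beta   # layer normalization
#+end_src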
125 |
+
Decoder:\\
|
126 |
+
The decoder is also composed of a stack of N = 6 identical layers. In addition to the two\\
|
127 |
+
sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head\\
|
128 |
+
attention over the output of the encoder stack. Similar to the encoder, we employ residual connections\\
|
129 |
+
around each of the sub-layers, followed by layer normalization. We also modify the self-attention\\
|
130 |
+
sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This\\
|
131 |
+
masking, combined with the fact that the output embeddings are offset by one position, ensures that the\\
|
132 |
+
predictions for position i can depend only on the known outputs at positions less than i.\\
|
133 |
+
3.2\\
|
134 |
+
Attention\\
|
135 |
+
An attention function can be described as mapping a query and a set of key-value pairs to an output,\\
|
136 |
+
where the query, keys, values, and output are all vectors. The output is computed as a weighted sum\\
|
137 |
+
of the values, where the weight assigned to each value is computed by a compatibility function of the\\
|
138 |
+
query with the corresponding key.\\
|
139 |
+
3.2.1\\
|
140 |
+
Scaled Dot-Product Attention\\
|
141 |
+
We call our particular attention "Scaled Dot-Product Attention" (Figure [[][2). ]]The input consists of\\
|
142 |
+
queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the\\
|
143 |
+
3\\
|
144 |
+
|
145 |
+
--------------
|
146 |
+
|
147 |
+
<<4>>[[file:Attention%20is%20All%20You%20Need-4_1.png]]\\
|
148 |
+
[[file:Attention%20is%20All%20You%20Need-4_2.png]]\\
|
149 |
+
Scaled Dot-Product Attention\\
|
150 |
+
Multi-Head Attention\\
|
151 |
+
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several\\
|
152 |
+
attention layers running in parallel.\\
|
query with all keys, divide each by √dk, and apply a softmax function to obtain the weights on the values.\\
157 |
+
In practice, we compute the attention function on a set of queries simultaneously, packed together\\
|
158 |
+
into a matrix Q. The keys and values are also packed together into matrices K and V . We compute\\
|
159 |
+
the matrix of outputs as:\\
|
Attention(Q, K, V) = softmax(QK^T / √dk) V    (1)\\
165 |
+
The two most commonly used attention functions are additive attention [[][[2], ]]and dot-product (multi-\\
|
166 |
+
plicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√dk. Additive attention computes the compatibility function using a feed-forward network with\\
172 |
+
a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is\\
|
173 |
+
much faster and more space-efficient in practice, since it can be implemented using highly optimized\\
|
174 |
+
matrix multiplication code.\\
|
175 |
+
While for small values of dk the two mechanisms perform similarly, additive attention outperforms\\
|
176 |
+
dot product attention without scaling for larger values of dk [[][[3]. ]]We suspect that for large values of\\
|
177 |
+
dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has\\
|
178 |
+
extremely small gradients [[][4]]. To counteract this effect, we scale the dot products by 1/√dk.\\
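A minimal NumPy sketch of equation (1), offered here as an editor's illustration rather than the authors' implementation; the optional mask argument anticipates the decoder masking described in section 3.2.3.

#+begin_src python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v); mask (optional): (n_q, n_k) boolean.
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # compatibilities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)         # effectively -inf on illegal connections
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # weighted sum of the values
#+end_src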
183 |
+
3.2.2\\
|
184 |
+
Multi-Head Attention\\
|
185 |
+
Instead of performing a single attention function with dmodel-dimensional keys, values and queries,\\
|
186 |
+
we found it beneficial to linearly project the queries, keys and values h times with different, learned\\
|
187 |
+
linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of\\
|
188 |
+
queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional\\
|
189 |
+
output values. These are concatenated and once again projected, resulting in the final values, as\\
|
190 |
+
depicted in Figure [[][2.]]\\
|
191 |
+
Multi-head attention allows the model to jointly attend to information from different representation\\
|
192 |
+
subspaces at different positions. With a single attention head, averaging inhibits this.\\
|
4 To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{dk} q_i k_i, has mean 0 and variance dk.\\
200 |
+
4\\
|
201 |
+
|
202 |
+
--------------
|
203 |
+
|
204 |
+
<<5>>MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O\\
where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)\\
Where the projections are parameter matrices W_i^Q ∈ R^{dmodel×dk}, W_i^K ∈ R^{dmodel×dk}, W_i^V ∈ R^{dmodel×dv} and W^O ∈ R^{h·dv×dmodel}.\\
217 |
+
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use\\
|
218 |
+
dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost\\
|
219 |
+
is similar to that of single-head attention with full dimensionality.\\
|
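A hedged NumPy sketch of the multi-head computation above, reusing the scaled_dot_product_attention sketch from section 3.2.1; the stacked weight tensors are an illustrative layout, not the paper's parameterization.

#+begin_src python
# Multi-head attention: project Q, K, V h times, attend in parallel, concatenate, project.
# W_q, W_k: (h, d_model, d_k); W_v: (h, d_model, d_v); W_o: (h * d_v, d_model).
import numpy as np

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, mask=None):
    heads = [scaled_dot_product_attention(Q @ W_q[i], K @ W_k[i], V @ W_v[i], mask)
             for i in range(W_q.shape[0])]
    return np.concatenate(heads, axis=-1) @ W_o      # project back to d_model dimensions
#+end_src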
220 |
+
3.2.3\\
|
221 |
+
Applications of Attention in our Model\\
|
222 |
+
The Transformer uses multi-head attention in three different ways:\\
|
223 |
+
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer,\\
|
224 |
+
and the memory keys and values come from the output of the encoder. This allows every\\
|
225 |
+
position in the decoder to attend over all positions in the input sequence. This mimics the\\
|
226 |
+
typical encoder-decoder attention mechanisms in sequence-to-sequence models such as\\
|
227 |
+
[[][[31, 2, 8].]]\\
|
228 |
+
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values\\
|
229 |
+
and queries come from the same place, in this case, the output of the previous layer in the\\
|
230 |
+
encoder. Each position in the encoder can attend to all positions in the previous layer of the\\
|
231 |
+
encoder.\\
|
232 |
+
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to\\
|
233 |
+
all positions in the decoder up to and including that position. We need to prevent leftward\\
|
234 |
+
information flow in the decoder to preserve the auto-regressive property. We implement this\\
|
235 |
+
inside of scaled dot-product attention by masking out (setting to −∞) all values in the input\\
|
236 |
+
of the softmax which correspond to illegal connections. See Figure [[][2.]]\\
|
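The decoder's self-attention mask can be made concrete with a small sketch (an editor's illustration; True marks a legal connection, and the mask plugs into the attention sketch from section 3.2.1).

#+begin_src python
# Causal mask: position i may attend only to positions <= i.
import numpy as np

def causal_mask(n):
    return np.tril(np.ones((n, n), dtype=bool))   # lower triangle = allowed connections

# usage: scaled_dot_product_attention(Q, K, V, mask=causal_mask(Q.shape[0]))
#+end_src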
237 |
+
3.3\\
|
238 |
+
Position-wise Feed-Forward Networks\\
|
239 |
+
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully\\
|
240 |
+
connected feed-forward network, which is applied to each position separately and identically. This\\
|
241 |
+
consists of two linear transformations with a ReLU activation in between.\\
|
242 |
+
FFN(x) = max(0, xW_1 + b_1) W_2 + b_2    (2)\\
244 |
+
While the linear transformations are the same across different positions, they use different parameters\\
|
245 |
+
from layer to layer. Another way of describing this is as two convolutions with kernel size 1.\\
|
246 |
+
The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality\\
|
247 |
+
dff = 2048.\\
|
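As an illustrative sketch of equation (2) (not the authors' code): two linear maps with a ReLU in between, applied to every position independently.

#+begin_src python
# Position-wise feed-forward network: FFN(x) = max(0, x W1 + b1) W2 + b2.
# x: (seq_len, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model).
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2   # ReLU between two linear layers
#+end_src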
248 |
+
3.4\\
|
249 |
+
Embeddings and Softmax\\
|
250 |
+
Similarly to other sequence transduction models, we use learned embeddings to convert the input\\
|
251 |
+
tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transfor-\\
|
252 |
+
mation and softmax function to convert the decoder output to predicted next-token probabilities. In\\
|
253 |
+
our model, we share the same weight matrix between the two embedding layers and the pre-softmax\\
|
linear transformation, similar to [[][[24]]]. In the embedding layers, we multiply those weights by √dmodel.\\
257 |
+
3.5\\
|
258 |
+
Positional Encoding\\
|
259 |
+
Since our model contains no recurrence and no convolution, in order for the model to make use of the\\
|
260 |
+
order of the sequence, we must inject some information about the relative or absolute position of the\\
|
261 |
+
tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the\\
|
262 |
+
5\\
|
263 |
+
|
264 |
+
--------------
|
265 |
+
|
266 |
+
<<6>>Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations\\
|
267 |
+
for different layer types. n is the sequence length, d is the representation dimension, k is the kernel\\
|
268 |
+
size of convolutions and r the size of the neighborhood in restricted self-attention.\\
|
| Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length |
|-----------------------------+----------------------+-----------------------+---------------------|
| Self-Attention              | O(n² · d)            | O(1)                  | O(1)                |
| Recurrent                   | O(n · d²)            | O(n)                  | O(n)                |
| Convolutional               | O(k · n · d²)        | O(1)                  | O(log_k(n))         |
| Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)              |
290 |
+
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel\\
|
291 |
+
as the embeddings, so that the two can be summed. There are many choices of positional encodings,\\
|
292 |
+
learned and fixed [[][[8].]]\\
|
293 |
+
In this work, we use sine and cosine functions of different frequencies:\\
|
PE(pos, 2i) = sin(pos / 10000^{2i/dmodel})\\
PE(pos, 2i+1) = cos(pos / 10000^{2i/dmodel})\\
296 |
+
where pos is the position and i is the dimension. That is, each dimension of the positional encoding\\
|
297 |
+
corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We\\
|
298 |
+
chose this function because we hypothesized it would allow the model to easily learn to attend by\\
|
299 |
+
relative positions, since for any fixed offset k, P Epos+k can be represented as a linear function of\\
|
300 |
+
P Epos.\\
|
301 |
+
We also experimented with using learned positional embeddings [[][[8] ]]instead, and found that the two\\
|
302 |
+
versions produced nearly identical results (see Table [[][3 ]]row (E)). We chose the sinusoidal version\\
|
303 |
+
because it may allow the model to extrapolate to sequence lengths longer than the ones encountered\\
|
304 |
+
during training.\\
|
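A short NumPy sketch of the sinusoidal encodings defined above (editor's illustration; assumes an even d_model).

#+begin_src python
# Sinusoidal positional encodings: even dimensions use sine, odd dimensions use cosine,
# with wavelengths forming a geometric progression from 2*pi to 10000*2*pi.
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]                    # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                 # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)    # pos / 10000^(2i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
#+end_src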
305 |
+
4\\
|
306 |
+
Why Self-Attention\\
|
307 |
+
In this section we compare various aspects of self-attention layers to the recurrent and convolu-\\
|
308 |
+
tional layers commonly used for mapping one variable-length sequence of symbol representations\\
|
309 |
+
(x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi ∈ Rd, such as a hidden\\
|
310 |
+
layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we\\
|
311 |
+
consider three desiderata.\\
|
312 |
+
One is the total computational complexity per layer. Another is the amount of computation that can\\
|
313 |
+
be parallelized, as measured by the minimum number of sequential operations required.\\
|
314 |
+
The third is the path length between long-range dependencies in the network. Learning long-range\\
|
315 |
+
dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the\\
|
316 |
+
ability to learn such dependencies is the length of the paths forward and backward signals have to\\
|
317 |
+
traverse in the network. The shorter these paths between any combination of positions in the input\\
|
318 |
+
and output sequences, the easier it is to learn long-range dependencies [[][[11]. ]]Hence we also compare\\
|
319 |
+
the maximum path length between any two input and output positions in networks composed of the\\
|
320 |
+
different layer types.\\
|
321 |
+
As noted in Table [[][1, ]]a self-attention layer connects all positions with a constant number of sequentially\\
|
322 |
+
executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of\\
|
323 |
+
computational complexity, self-attention layers are faster than recurrent layers when the sequence\\
|
324 |
+
length n is smaller than the representation dimensionality d, which is most often the case with\\
|
325 |
+
sentence representations used by state-of-the-art models in machine translations, such as word-piece\\
|
326 |
+
[[][[31] ]]and byte-pair [[][[25] ]]representations. To improve computational performance for tasks involving\\
|
327 |
+
very long sequences, self-attention could be restricted to considering only a neighborhood of size r in\\
|
328 |
+
6\\
|
329 |
+
|
330 |
+
--------------
|
331 |
+
|
332 |
+
<<7>>the input sequence centered around the respective output position. This would increase the maximum\\
|
333 |
+
path length to O(n/r). We plan to investigate this approach further in future work.\\
|
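A hypothetical sketch of such a restricted (local) attention mask, where each output position attends only to a neighborhood of size r around itself; this variant is only proposed here, not used in the reported models.

#+begin_src python
# Local attention mask: position i may attend to positions j with |i - j| <= r // 2.
import numpy as np

def local_mask(n, r):
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= r // 2   # True = allowed connection
#+end_src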
334 |
+
A single convolutional layer with kernel width k < n does not connect all pairs of input and output\\
|
335 |
+
positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels,\\
|
336 |
+
or O(logk(n)) in the case of dilated convolutions [[][[15], ]]increasing the length of the longest paths\\
|
337 |
+
between any two positions in the network. Convolutional layers are generally more expensive than\\
|
338 |
+
recurrent layers, by a factor of k. Separable convolutions [[][[6], ]]however, decrease the complexity\\
|
339 |
+
considerably, to O(k · n · d + n · d2). Even with k = n, however, the complexity of a separable\\
|
340 |
+
convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer,\\
|
341 |
+
the approach we take in our model.\\
|
342 |
+
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions\\
|
343 |
+
from our models and present and discuss examples in the appendix. Not only do individual attention\\
|
344 |
+
heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic\\
|
345 |
+
and semantic structure of the sentences.\\
|
346 |
+
5\\
|
347 |
+
Training\\
|
348 |
+
This section describes the training regime for our models.\\
|
349 |
+
5.1\\
|
350 |
+
Training Data and Batching\\
|
351 |
+
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million\\
|
352 |
+
sentence pairs. Sentences were encoded using byte-pair encoding [[][[3], ]]which has a shared source-\\
|
353 |
+
target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT\\
|
354 |
+
2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece\\
|
355 |
+
vocabulary [[][[31]. ]]Sentence pairs were batched together by approximate sequence length. Each training\\
|
356 |
+
batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000\\
|
357 |
+
target tokens.\\
|
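A hypothetical sketch of batching by approximate sequence length (the function name and token budget are illustrative; the actual pipeline is part of tensor2tensor).

#+begin_src python
# Group sentence pairs of similar length and cut a batch once roughly
# max_tokens source tokens have been accumulated.
def batch_by_length(pairs, max_tokens=25000):
    batches, batch, tokens = [], [], 0
    for src, tgt in sorted(pairs, key=lambda p: len(p[0])):
        if batch and tokens + len(src) > max_tokens:
            batches.append(batch)
            batch, tokens = [], 0
        batch.append((src, tgt))
        tokens += len(src)
    if batch:
        batches.append(batch)
    return batches
#+end_src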
358 |
+
5.2\\
|
359 |
+
Hardware and Schedule\\
|
360 |
+
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using\\
|
361 |
+
the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We\\
|
362 |
+
trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the\\
|
363 |
+
bottom line of table [[][3), ]]step time was 1.0 seconds. The big models were trained for 300,000 steps\\
|
364 |
+
(3.5 days).\\
|
365 |
+
5.3\\
|
366 |
+
Optimizer\\
|
367 |
+
We used the Adam optimizer [[][[17] ]]with β1 = 0.9, β2 = 0.98 and ε = 10^{−9}. We varied the learning\\
|
368 |
+
rate over the course of training, according to the formula:\\
|
lrate = dmodel^{−0.5} · min(step_num^{−0.5}, step_num · warmup_steps^{−1.5})    (3)\\
372 |
+
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps,\\
|
373 |
+
and decreasing it thereafter proportionally to the inverse square root of the step number. We used\\
|
374 |
+
warmup_steps = 4000.\\
|
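Equation (3) translates directly into a small helper (editor's sketch).

#+begin_src python
# Learning-rate schedule: linear warmup for warmup_steps steps, then decay
# proportionally to the inverse square root of the step number (step_num >= 1).
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * warmup_steps ** -1.5)
#+end_src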
375 |
+
5.4\\
|
376 |
+
Regularization\\
|
377 |
+
We employ three types of regularization during training:\\
|
378 |
+
Residual Dropout\\
|
379 |
+
We apply dropout [[][[27] ]]to the output of each sub-layer, before it is added to the\\
|
380 |
+
sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the\\
|
381 |
+
positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of\\
|
382 |
+
Pdrop = 0.1.\\
|
383 |
+
7\\
|
384 |
+
|
385 |
+
--------------
|
386 |
+
|
387 |
+
<<8>>Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the\\
|
388 |
+
English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.\\
|
| Model                           | BLEU EN-DE | BLEU EN-FR | Training Cost (FLOPs) EN-DE | Training Cost (FLOPs) EN-FR |
|---------------------------------+------------+------------+-----------------------------+-----------------------------|
| ByteNet [15]                    | 23.75      |            |                             |                             |
| Deep-Att + PosUnk [32]          |            | 39.2       |                             | 1.0 · 10^20                 |
| GNMT + RL [31]                  | 24.6       | 39.92      | 2.3 · 10^19                 | 1.4 · 10^20                 |
| ConvS2S [8]                     | 25.16      | 40.46      | 9.6 · 10^18                 | 1.5 · 10^20                 |
| MoE [26]                        | 26.03      | 40.56      | 2.0 · 10^19                 | 1.2 · 10^20                 |
| Deep-Att + PosUnk Ensemble [32] |            | 40.4       |                             | 8.0 · 10^20                 |
| GNMT + RL Ensemble [31]         | 26.30      | 41.16      | 1.8 · 10^20                 | 1.1 · 10^21                 |
| ConvS2S Ensemble [8]            | 26.36      | 41.29      | 7.7 · 10^19                 | 1.2 · 10^21                 |
| Transformer (base model)        | 27.3       | 38.1       | 3.3 · 10^18                 | 3.3 · 10^18                 |
| Transformer (big)               | 28.4       | 41.0       | 2.3 · 10^19                 | 2.3 · 10^19                 |
439 |
+
Label Smoothing\\
|
440 |
+
During training, we employed label smoothing of value ε_ls = 0.1 [[][[30]]]. This\\
|
441 |
+
hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.\\
|
442 |
+
6\\
|
443 |
+
Results\\
|
444 |
+
6.1\\
|
445 |
+
Machine Translation\\
|
446 |
+
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big)\\
|
447 |
+
in Table [[][2) ]]outperforms the best previously reported models (including ensembles) by more than 2.0\\
|
448 |
+
BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is\\
|
449 |
+
listed in the bottom line of Table [[][3. ]]Training took 3.5 days on 8 P100 GPUs. Even our base model\\
|
450 |
+
surpasses all previously published models and ensembles, at a fraction of the training cost of any of\\
|
451 |
+
the competitive models.\\
|
452 |
+
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0,\\
|
453 |
+
outperforming all of the previously published single models, at less than 1/4 the training cost of the\\
|
454 |
+
previous state-of-the-art model. The Transformer (big) model trained for English-to-French used\\
|
455 |
+
dropout rate Pdrop = 0.1, instead of 0.3.\\
|
456 |
+
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which\\
|
457 |
+
were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We\\
|
458 |
+
used beam search with a beam size of 4 and length penalty α = 0.6 [[][[31]. ]]These hyperparameters\\
|
459 |
+
were chosen after experimentation on the development set. We set the maximum output length during\\
|
460 |
+
inference to input length + 50, but terminate early when possible [[][[31].]]\\
|
461 |
+
Table [[][2 ]]summarizes our results and compares our translation quality and training costs to other model\\
|
462 |
+
architectures from the literature. We estimate the number of floating point operations used to train a\\
|
463 |
+
model by multiplying the training time, the number of GPUs used, and an estimate of the sustained\\
|
464 |
+
single-precision floating-point capacity of each GPU [[][5.]]\\
|
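For example, applying this estimate to the big model: 3.5 days of training is about 3.0 · 10^5 seconds, and 3.0 · 10^5 s × 8 GPUs × 9.5 · 10^12 FLOPS per P100 (the sustained rate assumed in footnote 5) gives roughly 2.3 · 10^19 FLOPs, the figure reported in Table 2; the base model's 12 hours give about 4.3 · 10^4 s × 8 × 9.5 · 10^12 ≈ 3.3 · 10^18 FLOPs.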
465 |
+
6.2\\
|
466 |
+
Model Variations\\
|
467 |
+
To evaluate the importance of different components of the Transformer, we varied our base model\\
|
468 |
+
in different ways, measuring the change in performance on English-to-German translation on the\\
|
469 |
+
development set, newstest2013. We used beam search as described in the previous section, but no\\
|
470 |
+
checkpoint averaging. We present these results in Table [[][3.]]\\
|
471 |
+
In Table [[][3 ]]rows (A), we vary the number of attention heads and the attention key and value dimensions,\\
|
472 |
+
keeping the amount of computation constant, as described in Section [[][3.2.2. ]]While single-head\\
|
473 |
+
attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.\\
|
474 |
+
5 We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.\\
|
475 |
+
8\\
|
476 |
+
|
477 |
+
--------------
|
478 |
+
|
479 |
+
<<9>>Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base\\
|
480 |
+
model. All metrics are on the English-to-German translation development set, newstest2013. Listed\\
|
481 |
+
perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to\\
|
482 |
+
per-word perplexities.\\
|
|      | N | dmodel                                    | dff  | h  | dk  | dv  | Pdrop | ε_ls | train steps | PPL (dev) | BLEU (dev) | params ×10^6 |
|------+---+-------------------------------------------+------+----+-----+-----+-------+------+-------------+-----------+------------+--------------|
| base | 6 | 512                                       | 2048 | 8  | 64  | 64  | 0.1   | 0.1  | 100K        | 4.92      | 25.8       | 65           |
| (A)  |   |                                           |      | 1  | 512 | 512 |       |      |             | 5.29      | 24.9       |              |
|      |   |                                           |      | 4  | 128 | 128 |       |      |             | 5.00      | 25.5       |              |
|      |   |                                           |      | 16 | 32  | 32  |       |      |             | 4.91      | 25.8       |              |
|      |   |                                           |      | 32 | 16  | 16  |       |      |             | 5.01      | 25.4       |              |
| (B)  |   |                                           |      |    | 16  |     |       |      |             | 5.16      | 25.1       | 58           |
|      |   |                                           |      |    | 32  |     |       |      |             | 5.01      | 25.4       | 60           |
| (C)  | 2 |                                           |      |    |     |     |       |      |             | 6.11      | 23.7       | 36           |
|      | 4 |                                           |      |    |     |     |       |      |             | 5.19      | 25.3       | 50           |
|      | 8 |                                           |      |    |     |     |       |      |             | 4.88      | 25.5       | 80           |
|      |   | 256                                       |      |    | 32  | 32  |       |      |             | 5.75      | 24.5       | 28           |
|      |   | 1024                                      |      |    | 128 | 128 |       |      |             | 4.66      | 26.0       | 168          |
|      |   |                                           | 1024 |    |     |     |       |      |             | 5.12      | 25.4       | 53           |
|      |   |                                           | 4096 |    |     |     |       |      |             | 4.75      | 26.2       | 90           |
| (D)  |   |                                           |      |    |     |     | 0.0   |      |             | 5.77      | 24.6       |              |
|      |   |                                           |      |    |     |     | 0.2   |      |             | 4.95      | 25.5       |              |
|      |   |                                           |      |    |     |     |       | 0.0  |             | 4.67      | 25.3       |              |
|      |   |                                           |      |    |     |     |       | 0.2  |             | 5.47      | 25.7       |              |
| (E)  |   | positional embedding instead of sinusoids |      |    |     |     |       |      |             | 4.92      | 25.7       |              |
| big  | 6 | 1024                                      | 4096 | 16 |     |     | 0.3   |      | 300K        | 4.33      | 26.4       | 213          |
602 |
+
In Table [[][3 ]]rows (B), we observe that reducing the attention key size dk hurts model quality. This\\
|
603 |
+
suggests that determining compatibility is not easy and that a more sophisticated compatibility\\
|
604 |
+
function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected,\\
|
605 |
+
bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our\\
|
606 |
+
sinusoidal positional encoding with learned positional embeddings [[][[8], ]]and observe nearly identical\\
|
607 |
+
results to the base model.\\
|
608 |
+
7\\
|
609 |
+
Conclusion\\
|
610 |
+
In this work, we presented the Transformer, the first sequence transduction model based entirely on\\
|
611 |
+
attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with\\
|
612 |
+
multi-headed self-attention.\\
|
613 |
+
For translation tasks, the Transformer can be trained significantly faster than architectures based\\
|
614 |
+
on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014\\
|
615 |
+
English-to-French translation tasks, we achieve a new state of the art. In the former task our best\\
|
616 |
+
model outperforms even all previously reported ensembles.\\
|
617 |
+
We are excited about the future of attention-based models and plan to apply them to other tasks. We\\
|
618 |
+
plan to extend the Transformer to problems involving input and output modalities other than text and\\
|
619 |
+
to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs\\
|
620 |
+
such as images, audio and video. Making generation less sequential is another research goal of ours.\\
|
The code we used to train and evaluate our models is available at [[https://github.com/tensorflow/tensor2tensor][https://github.com/tensorflow/tensor2tensor]].\\
623 |
+
Acknowledgements\\
|
624 |
+
We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful\\
|
625 |
+
comments, corrections and inspiration.\\
|
626 |
+
9\\
|
627 |
+
|
628 |
+
--------------
|
629 |
+
|
630 |
+
<<10>>References\\
|
631 |
+
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. /arXiv preprint/\\
|
632 |
+
/arXiv:1607.06450/, 2016.\\
|
633 |
+
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly\\
|
634 |
+
learning to align and translate. /CoRR/, abs/1409.0473, 2014.\\
|
635 |
+
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural\\
|
636 |
+
machine translation architectures. /CoRR/, abs/1703.03906, 2017.\\
|
637 |
+
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine\\
|
638 |
+
reading. /arXiv preprint arXiv:1601.06733/, 2016.\\
|
639 |
+
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,\\
|
640 |
+
and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical\\
|
641 |
+
machine translation. /CoRR/, abs/1406.1078, 2014.\\
|
642 |
+
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. /arXiv/\\
|
643 |
+
/preprint arXiv:1610.02357/, 2016.\\
|
644 |
+
[7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation\\
|
645 |
+
of gated recurrent neural networks on sequence modeling. /CoRR/, abs/1412.3555, 2014.\\
|
646 |
+
[8] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolu-\\
|
647 |
+
tional sequence to sequence learning. /arXiv preprint arXiv:1705.03122v2/, 2017.\\
|
648 |
+
[9] Alex Graves.\\
|
649 |
+
Generating sequences with recurrent neural networks.\\
|
650 |
+
/arXiv preprint/\\
|
651 |
+
/arXiv:1308.0850/, 2013.\\
|
652 |
+
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for im-\\
|
653 |
+
age recognition. In /Proceedings of the IEEE Conference on Computer Vision and Pattern/\\
|
654 |
+
/Recognition/, pages 770--778, 2016.\\
|
655 |
+
[11] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in\\
|
656 |
+
recurrent nets: the difficulty of learning long-term dependencies, 2001.\\
|
657 |
+
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. /Neural computation/,\\
|
658 |
+
9(8):1735--1780, 1997.\\
|
659 |
+
[13] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring\\
|
660 |
+
the limits of language modeling. /arXiv preprint arXiv:1602.02410/, 2016.\\
|
661 |
+
[14] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In /International Conference/\\
|
662 |
+
/on Learning Representations (ICLR)/, 2016.\\
|
663 |
+
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Ko-\\
|
664 |
+
ray Kavukcuoglu. Neural machine translation in linear time. /arXiv preprint arXiv:1610.10099v2/,\\
|
665 |
+
2017.\\
|
666 |
+
[16] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks.\\
|
667 |
+
In /International Conference on Learning Representations/, 2017.\\
|
668 |
+
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In /ICLR/, 2015.\\
|
669 |
+
[18] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. /arXiv preprint/\\
|
670 |
+
/arXiv:1703.10722/, 2017.\\
|
671 |
+
[19] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen\\
|
672 |
+
Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. /arXiv preprint\\
|
673 |
+
arXiv:1703.03130/, 2017.\\
|
674 |
+
[20] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In /Advances in Neural/\\
|
675 |
+
/Information Processing Systems, (NIPS)/, 2016.\\
|
676 |
+
10\\
|
677 |
+
|
678 |
+
--------------
|
679 |
+
|
680 |
+
<<11>>[21] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-\\
|
681 |
+
based neural machine translation. /arXiv preprint arXiv:1508.04025/, 2015.\\
|
682 |
+
[22] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention\\
|
683 |
+
model. In /Empirical Methods in Natural Language Processing/, 2016.\\
|
684 |
+
[23] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive\\
|
685 |
+
summarization. /arXiv preprint arXiv:1705.04304/, 2017.\\
|
686 |
+
[24] Ofir Press and Lior Wolf. Using the output embedding to improve language models. /arXiv/\\
|
687 |
+
/preprint arXiv:1608.05859/, 2016.\\
|
688 |
+
[25] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words\\
|
689 |
+
with subword units. /arXiv preprint arXiv:1508.07909/, 2015.\\
|
690 |
+
[26] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,\\
|
691 |
+
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts\\
|
692 |
+
layer. /arXiv preprint arXiv:1701.06538/, 2017.\\
|
693 |
+
[27] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-\\
|
694 |
+
nov. Dropout: a simple way to prevent neural networks from overfitting. /Journal of Machine/\\
|
695 |
+
/Learning Research/, 15(1):1929--1958, 2014.\\
|
696 |
+
[28] Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. End-to-end memory\\
|
697 |
+
networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,\\
|
698 |
+
/Advances in Neural Information Processing Systems 28/, pages 2440--2448. Curran Associates,\\
|
699 |
+
Inc., 2015.\\
|
700 |
+
[29] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural\\
|
701 |
+
networks. In /Advances in Neural Information Processing Systems/, pages 3104--3112, 2014.\\
|
702 |
+
[30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.\\
|
703 |
+
Rethinking the inception architecture for computer vision. /CoRR/, abs/1512.00567, 2015.\\
|
704 |
+
[31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang\\
|
705 |
+
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine\\
|
706 |
+
translation system: Bridging the gap between human and machine translation. /arXiv preprint\\
|
707 |
+
arXiv:1609.08144/, 2016.\\
|
708 |
+
[32] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with\\
|
709 |
+
fast-forward connections for neural machine translation. /CoRR/, abs/1606.04199, 2016.\\
|
710 |
+
11\\
|
711 |
+
|
712 |
+
--------------
|
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need.pdf
ADDED
Binary file (404 kB)
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Need_ind.html
ADDED
@@ -0,0 +1,18 @@
+<!DOCTYPE html><html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+<title></title>
+</head>
+<body>
+<a href="Attention is All You Needs.html#1" target="contents" >Page 1</a><br/>
+<a href="Attention is All You Needs.html#2" target="contents" >Page 2</a><br/>
+<a href="Attention is All You Needs.html#3" target="contents" >Page 3</a><br/>
+<a href="Attention is All You Needs.html#4" target="contents" >Page 4</a><br/>
+<a href="Attention is All You Needs.html#5" target="contents" >Page 5</a><br/>
+<a href="Attention is All You Needs.html#6" target="contents" >Page 6</a><br/>
+<a href="Attention is All You Needs.html#7" target="contents" >Page 7</a><br/>
+<a href="Attention is All You Needs.html#8" target="contents" >Page 8</a><br/>
+<a href="Attention is All You Needs.html#9" target="contents" >Page 9</a><br/>
+<a href="Attention is All You Needs.html#10" target="contents" >Page 10</a><br/>
+<a href="Attention is All You Needs.html#11" target="contents" >Page 11</a><br/>
+</body>
+</html>
2017/12/04/USA/LongBeachCA/NIPS2017/Attention is All You Needs.html
ADDED
@@ -0,0 +1,558 @@
1 |
+
<!DOCTYPE html><html>
|
2 |
+
<head>
|
3 |
+
<title></title>
|
4 |
+
<style type="text/css">
|
5 |
+
<!--
|
6 |
+
.xflip {
|
7 |
+
-moz-transform: scaleX(-1);
|
8 |
+
-webkit-transform: scaleX(-1);
|
9 |
+
-o-transform: scaleX(-1);
|
10 |
+
transform: scaleX(-1);
|
11 |
+
filter: fliph;
|
12 |
+
}
|
13 |
+
.yflip {
|
14 |
+
-moz-transform: scaleY(-1);
|
15 |
+
-webkit-transform: scaleY(-1);
|
16 |
+
-o-transform: scaleY(-1);
|
17 |
+
transform: scaleY(-1);
|
18 |
+
filter: flipv;
|
19 |
+
}
|
20 |
+
.xyflip {
|
21 |
+
-moz-transform: scaleX(-1) scaleY(-1);
|
22 |
+
-webkit-transform: scaleX(-1) scaleY(-1);
|
23 |
+
-o-transform: scaleX(-1) scaleY(-1);
|
24 |
+
transform: scaleX(-1) scaleY(-1);
|
25 |
+
filter: fliph + flipv;
|
26 |
+
}
|
27 |
+
-->
|
28 |
+
</style>
|
29 |
+
</head>
|
30 |
+
<body>
|
31 |
+
<a name=1></a>Attention Is All You Need<br/>
|
32 |
+
Ashish Vaswani∗<br/>
|
33 |
+
Noam Shazeer∗<br/>
|
34 |
+
Niki Parmar∗<br/>
|
35 |
+
Jakob Uszkoreit∗<br/>
|
36 |
+
Google Brain<br/>
|
37 |
+
Google Brain<br/>
|
38 |
+
Google Research<br/>
|
39 |
+
Google Research<br/>
|
40 |
+
avaswani@google.com<br/>
|
41 |
+
noam@google.com<br/>
|
42 |
+
nikip@google.com<br/>
|
43 |
+
usz@google.com<br/>
|
44 |
+
Llion Jones∗<br/>
|
45 |
+
Aidan N. Gomez∗ †<br/>
|
46 |
+
Łukasz Kaiser∗<br/>
|
47 |
+
Google Research<br/>
|
48 |
+
University of Toronto<br/>
|
49 |
+
Google Brain<br/>
|
50 |
+
llion@google.com<br/>
|
51 |
+
aidan@cs.toronto.edu<br/>
|
52 |
+
lukaszkaiser@google.com<br/>
|
53 |
+
Illia Polosukhin∗ ‡<br/>
|
54 |
+
illia.polosukhin@gmail.com<br/>
|
55 |
+
Abstract<br/>
|
56 |
+
The dominant sequence transduction models are based on complex recurrent or<br/>convolutional neural networks that include an encoder and a decoder. The best<br/>performing models also connect the encoder and decoder through an attention<br/>mechanism. We propose a new simple network architecture, the Transformer,<br/>based solely on attention mechanisms, dispensing with recurrence and convolutions<br/>entirely. Experiments on two machine translation tasks show these models to<br/>be superior in quality while being more parallelizable and requiring significantly<br/>less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-<br/>to-German translation task, improving over the existing best results, including<br/>ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,<br/>our model establishes a new single-model state-of-the-art BLEU score of 41.0 after<br/>training for 3.5 days on eight GPUs, a small fraction of the training costs of the<br/>best models from the literature.<br/>
|
57 |
+
1<br/>
|
58 |
+
Introduction<br/>
|
59 |
+
Recurrent neural networks, long short-term memory <a href="">[12] </a>and gated recurrent <a href="">[7] </a>neural networks<br/>in particular, have been firmly established as state of the art approaches in sequence modeling and<br/>transduction problems such as language modeling and machine translation <a href="">[29, 2, 5]. </a>Numerous<br/>efforts have since continued to push the boundaries of recurrent language models and encoder-decoder<br/>architectures <a href="">[31, 21, 13].</a><br/>
|
60 |
+
∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started<br/>
|
61 |
+
the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and<br/>has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head<br/>attention and the parameter-free position representation and became the other person involved in nearly every<br/>detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and<br/>tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and<br/>efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and<br/>implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating<br/>our research.<br/>
|
62 |
+
†Work performed while at Google Brain.<br/>‡Work performed while at Google Research.<br/>
|
63 |
+
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.<br/>
|
64 |
+
<hr/>
|
65 |
+
<a name=2></a>Recurrent models typically factor computation along the symbol positions of the input and output<br/>sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden<br/>states ht, as a function of the previous hidden state ht−1 and the input for position t. This inherently<br/>
|
66 |
+
sequential nature precludes parallelization within training examples, which becomes critical at longer<br/>sequence lengths, as memory constraints limit batching across examples. Recent work has achieved<br/>significant improvements in computational efficiency through factorization tricks <a href="">[18] </a>and conditional<br/>computation <a href="">[26], </a>while also improving model performance in case of the latter. The fundamental<br/>constraint of sequential computation, however, remains.<br/>
|
67 |
+
Attention mechanisms have become an integral part of compelling sequence modeling and transduc-<br/>
|
68 |
+
tion models in various tasks, allowing modeling of dependencies without regard to their distance in<br/>the input or output sequences <a href="">[2, 16]. </a>In all but a few cases <a href="">[22], </a>however, such attention mechanisms<br/>are used in conjunction with a recurrent network.<br/>
|
69 |
+
In this work we propose the Transformer, a model architecture eschewing recurrence and instead<br/>relying entirely on an attention mechanism to draw global dependencies between input and output.<br/>
|
70 |
+
The Transformer allows for significantly more parallelization and can reach a new state of the art in<br/>
|
71 |
+
translation quality after being trained for as little as twelve hours on eight P100 GPUs.<br/>
|
72 |
+
2<br/>
|
73 |
+
Background<br/>
|
74 |
+
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU<br/>
|
75 |
+
<a href="">[20], </a>ByteNet <a href="">[15] </a>and ConvS2S <a href="">[8], </a>all of which use convolutional neural networks as basic building<br/>block, computing hidden representations in parallel for all input and output positions. In these models,<br/>the number of operations required to relate signals from two arbitrary input or output positions grows<br/>in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes<br/>it more difficult to learn dependencies between distant positions <a href="">[11]. </a>In the Transformer this is<br/>reduced to a constant number of operations, albeit at the cost of reduced effective resolution due<br/>to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as<br/>described in section <a href="">3.2.</a><br/>
|
76 |
+
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions<br/>of a single sequence in order to compute a representation of the sequence. Self-attention has been<br/>used successfully in a variety of tasks including reading comprehension, abstractive summarization,<br/>textual entailment and learning task-independent sentence representations <a href="">[4, 22, 23, 19].</a><br/>
|
77 |
+
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-<br/>aligned recurrence and have been shown to perform well on simple-language question answering and<br/>language modeling tasks <a href="">[28].</a><br/>
|
78 |
+
To the best of our knowledge, however, the Transformer is the first transduction model relying<br/>
|
79 |
+
entirely on self-attention to compute representations of its input and output without using sequence-<br/>aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate<br/>self-attention and discuss its advantages over models such as <a href="">[14, 15] </a>and <a href="">[8].</a><br/>
|
80 |
+
3<br/>
|
81 |
+
Model Architecture<br/>
|
82 |
+
Most competitive neural sequence transduction models have an encoder-decoder structure <a href="">[5, 2, 29].<br/></a>Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence<br/>of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output<br/>sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive<br/><a href="">[9], </a>consuming the previously generated symbols as additional input when generating the next.<br/>
|
83 |
+
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully<br/>
|
84 |
+
connected layers for both the encoder and decoder, shown in the left and right halves of Figure <a href="">1,<br/></a>respectively.<br/>
|
85 |
+
3.1<br/>
|
86 |
+
Encoder and Decoder Stacks<br/>
|
87 |
+
Encoder:<br/>
|
88 |
+
The encoder is composed of a stack of N = 6 identical layers. Each layer has two<br/>
|
89 |
+
sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-<br/>
|
90 |
+
2<br/>
|
91 |
+
<hr/>
|
92 |
+
<a name=3></a><img src="Attention is All You Need-3_1.png"/><br/>
|
93 |
+
Figure 1: The Transformer - model architecture.<br/>
|
94 |
+
wise fully connected feed-forward network. We employ a residual connection <a href="">[10] </a>around each of<br/>
|
95 |
+
the two sub-layers, followed by layer normalization <a href="">[1]. </a>That is, the output of each sub-layer is<br/>LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer<br/>itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding<br/>layers, produce outputs of dimension dmodel = 512.<br/>
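As a rough illustration of this sub-layer wiring, the following NumPy sketch (my own, not the authors' released code) applies LayerNorm(x + Sublayer(x)); the parameter names gamma, beta and the eps constant are assumptions.<br/>
#+begin_src python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    """Normalize each position's d_model-dimensional vector, then rescale."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def residual_sublayer(x, sublayer, gamma, beta):
    """Post-norm residual wiring: LayerNorm(x + Sublayer(x))."""
    return layer_norm(x + sublayer(x), gamma, beta)
#+end_src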
|
96 |
+
Decoder:<br/>
|
97 |
+
The decoder is also composed of a stack of N = 6 identical layers. In addition to the two<br/>
|
98 |
+
sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head<br/>attention over the output of the encoder stack. Similar to the encoder, we employ residual connections<br/>around each of the sub-layers, followed by layer normalization. We also modify the self-attention<br/>sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This<br/>masking, combined with the fact that the output embeddings are offset by one position, ensures that the<br/>predictions for position i can depend only on the known outputs at positions less than i.<br/>
|
99 |
+
3.2<br/>
|
100 |
+
Attention<br/>
|
101 |
+
An attention function can be described as mapping a query and a set of key-value pairs to an output,<br/>where the query, keys, values, and output are all vectors. The output is computed as a weighted sum<br/>
|
102 |
+
of the values, where the weight assigned to each value is computed by a compatibility function of the<br/>query with the corresponding key.<br/>
|
103 |
+
3.2.1<br/>
|
104 |
+
Scaled Dot-Product Attention<br/>
|
105 |
+
We call our particular attention "Scaled Dot-Product Attention" (Figure <a href="">2). </a>The input consists of<br/>
|
106 |
+
queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the<br/>
|
107 |
+
3<br/>
|
108 |
+
<hr/>
|
109 |
+
<a name=4></a><img src="Attention is All You Need-4_1.png"/><br/>
|
110 |
+
<img src="Attention is All You Need-4_2.png"/><br/>
|
111 |
+
Scaled Dot-Product Attention<br/>
|
112 |
+
Multi-Head Attention<br/>
|
113 |
+
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several<br/>attention layers running in parallel.<br/>
|
114 |
+
query with all keys, divide each by √dk, and apply a softmax function to obtain the weights on the<br/>values.<br/>
In practice, we compute the attention function on a set of queries simultaneously, packed together<br/>into a matrix Q. The keys and values are also packed together into matrices K and V . We compute<br/>the matrix of outputs as:<br/>
Attention(Q, K, V ) = softmax(QK^T / √dk) V    (1)<br/>
|
124 |
+
The two most commonly used attention functions are additive attention <a href="">[2], </a>and dot-product (multi-<br/>
|
125 |
+
plicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor<br/>of 1/√dk. Additive attention computes the compatibility function using a feed-forward network with<br/>a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is<br/>much faster and more space-efficient in practice, since it can be implemented using highly optimized<br/>matrix multiplication code.<br/>
|
131 |
+
While for small values of dk the two mechanisms perform similarly, additive attention outperforms<br/>dot product attention without scaling for larger values of dk <a href="">[3]. </a>We suspect that for large values of<br/>dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has<br/>extremely small gradients <a href="">4</a>. To counteract this effect, we scale the dot products by 1/√dk.<br/>
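To make equation (1) concrete, here is a minimal NumPy sketch (not the authors' code); the optional boolean mask argument is an assumption added here so the same helper can serve the decoder masking described in section 3.2.3.<br/>
#+begin_src python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Eq. (1): softmax(Q K^T / sqrt(d_k)) V, with an optional boolean mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)     # (..., n_q, n_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)          # masked positions ~ -inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V
#+end_src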
|
137 |
+
3.2.2<br/>
|
138 |
+
Multi-Head Attention<br/>
|
139 |
+
Instead of performing a single attention function with dmodel-dimensional keys, values and queries,<br/>
|
140 |
+
we found it beneficial to linearly project the queries, keys and values h times with different, learned<br/>
|
141 |
+
linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of<br/>queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional<br/>output values. These are concatenated and once again projected, resulting in the final values, as<br/>depicted in Figure <a href="">2.</a><br/>
|
142 |
+
Multi-head attention allows the model to jointly attend to information from different representation<br/>subspaces at different positions. With a single attention head, averaging inhibits this.<br/>
|
143 |
+
4To illustrate why the dot products get large, assume that the components of q and k are independent random<br/>variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{dk} q_i k_i, has mean 0 and variance dk.<br/>
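A quick numerical check of this footnote's claim (my own illustration, not from the paper): sampling many random query/key pairs with unit-variance components gives dot products whose variance is close to dk.<br/>
#+begin_src python
import numpy as np

rng = np.random.default_rng(0)
d_k = 64
# 100k random query/key pairs with zero-mean, unit-variance components.
q = rng.standard_normal((100_000, d_k))
k = rng.standard_normal((100_000, d_k))
dots = (q * k).sum(axis=1)
print(dots.mean(), dots.var())   # mean ~ 0, variance ~ d_k = 64
#+end_src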
|
150 |
+
4<br/>
|
151 |
+
<hr/>
|
152 |
+
<a name=5></a>MultiHead(Q, K, V ) = Concat(head_1, ..., head_h) W^O<br/>
where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)<br/>
Where the projections are parameter matrices W_i^Q ∈ R^(dmodel×dk), W_i^K ∈ R^(dmodel×dk), W_i^V ∈ R^(dmodel×dv) and W^O ∈ R^(h·dv×dmodel).<br/>
|
165 |
+
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use<br/>dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost<br/>is similar to that of single-head attention with full dimensionality.<br/>
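A minimal sketch of this projection-and-concatenation scheme, reusing the scaled_dot_product_attention helper from section 3.2.1 above; the params dictionary layout (per-head projection lists plus an output matrix) is an assumption made only for this illustration.<br/>
#+begin_src python
import numpy as np
# Assumes scaled_dot_product_attention from the sketch in section 3.2.1 is in scope.

def multi_head_attention(x_q, x_kv, params, h=8):
    """params: {'W_q': [h arrays d_model x d_k], 'W_k': [...], 'W_v': [h arrays
    d_model x d_v], 'W_o': array (h*d_v, d_model)} -- assumed layout."""
    heads = []
    for i in range(h):
        Q = x_q  @ params["W_q"][i]          # (n_q, d_k)
        K = x_kv @ params["W_k"][i]          # (n_k, d_k)
        V = x_kv @ params["W_v"][i]          # (n_k, d_v)
        heads.append(scaled_dot_product_attention(Q, K, V))
    # Concatenate the h heads and project back to d_model.
    return np.concatenate(heads, axis=-1) @ params["W_o"]
#+end_src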
|
166 |
+
3.2.3<br/>
|
167 |
+
Applications of Attention in our Model<br/>
|
168 |
+
The Transformer uses multi-head attention in three different ways:<br/>
|
169 |
+
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer,<br/>
|
170 |
+
and the memory keys and values come from the output of the encoder. This allows every<br/>position in the decoder to attend over all positions in the input sequence. This mimics the<br/>typical encoder-decoder attention mechanisms in sequence-to-sequence models such as<br/><a href="">[31, 2, 8].</a><br/>
|
171 |
+
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values<br/>
|
172 |
+
and queries come from the same place, in this case, the output of the previous layer in the<br/>encoder. Each position in the encoder can attend to all positions in the previous layer of the<br/>encoder.<br/>
|
173 |
+
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to<br/>
|
174 |
+
all positions in the decoder up to and including that position. We need to prevent leftward<br/>information flow in the decoder to preserve the auto-regressive property. We implement this<br/>inside of scaled dot-product attention by masking out (setting to −∞) all values in the input<br/>of the softmax which correspond to illegal connections. See Figure <a href="">2 </a>(a small sketch of this mask follows this list).<br/>
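The causal (leftward-blocking) mask mentioned in the last item can be built as a lower-triangular boolean matrix and passed to the scaled dot-product attention sketch above; this is my own illustration, not the released implementation.<br/>
#+begin_src python
import numpy as np

def causal_mask(n):
    """Boolean mask: position i may attend only to positions j <= i."""
    return np.tril(np.ones((n, n), dtype=bool))

# Example usage with the earlier sketch: scores at "future" positions are pushed
# to a large negative value before the softmax, so their weights are ~0.
# out = scaled_dot_product_attention(Q, K, V, mask=causal_mask(Q.shape[0]))
#+end_src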
|
175 |
+
3.3<br/>
|
176 |
+
Position-wise Feed-Forward Networks<br/>
|
177 |
+
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully<br/>connected feed-forward network, which is applied to each position separately and identically. This<br/>consists of two linear transformations with a ReLU activation in between.<br/>
|
178 |
+
FFN(x) = max(0, xW1 + b1)W2 + b2    (2)<br/>
|
180 |
+
While the linear transformations are the same across different positions, they use different parameters<br/>
|
181 |
+
from layer to layer. Another way of describing this is as two convolutions with kernel size 1.<br/>
|
182 |
+
The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality<br/>
|
183 |
+
dff = 2048.<br/>
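Equation (2) amounts to two matrix multiplications with a ReLU in between, applied independently at every position; a minimal sketch, assuming the shapes stated above (dmodel = 512, dff = 2048):<br/>
#+begin_src python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Eq. (2): two linear maps with a ReLU in between, applied to each position.
    Assumed shapes: x (n, 512), W1 (512, 2048), b1 (2048,), W2 (2048, 512), b2 (512,)."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
#+end_src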
|
184 |
+
3.4<br/>
|
185 |
+
Embeddings and Softmax<br/>
|
186 |
+
Similarly to other sequence transduction models, we use learned embeddings to convert the input<br/>tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transfor-<br/>mation and softmax function to convert the decoder output to predicted next-token probabilities. In<br/>our model, we share the same weight matrix between the two embedding layers and the pre-softmax<br/>
|
187 |
+
linear transformation, similar to <a href="">[24]. </a>In the embedding layers, we multiply those weights by √dmodel.<br/>
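The weight sharing described here can be pictured as one matrix serving both as the embedding lookup (scaled by √dmodel) and as the pre-softmax projection; a minimal sketch under assumed shapes and an assumed initialization scale, not the released code.<br/>
#+begin_src python
import numpy as np

d_model, vocab = 512, 37000
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab, d_model)) * d_model ** -0.5   # shared weight matrix (assumed init)

def embed(token_ids):
    # Input/output embedding lookup, multiplied by sqrt(d_model) as in section 3.4.
    return E[np.asarray(token_ids)] * np.sqrt(d_model)

def output_logits(decoder_output):
    # The same matrix acts as the pre-softmax linear transformation.
    return decoder_output @ E.T
#+end_src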
|
190 |
+
3.5<br/>
|
191 |
+
Positional Encoding<br/>
|
192 |
+
Since our model contains no recurrence and no convolution, in order for the model to make use of the<br/>order of the sequence, we must inject some information about the relative or absolute position of the<br/>tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the<br/>
|
193 |
+
5<br/>
|
194 |
+
<hr/>
|
195 |
+
<a name=6></a>Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations<br/>
|
196 |
+
for different layer types. n is the sequence length, d is the representation dimension, k is the kernel<br/>size of convolutions and r the size of the neighborhood in restricted self-attention.<br/>
|
197 |
+
| Layer Type | Complexity per Layer | Sequential Operations | Maximum Path Length |<br/>
| Self-Attention | O(n² · d) | O(1) | O(1) |<br/>
| Recurrent | O(n · d²) | O(n) | O(n) |<br/>
| Convolutional | O(k · n · d²) | O(1) | O(log_k(n)) |<br/>
| Self-Attention (restricted) | O(r · n · d) | O(1) | O(n/r) |<br/>
|
218 |
+
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel<br/>as the embeddings, so that the two can be summed. There are many choices of positional encodings,<br/>learned and fixed <a href="">[8].</a><br/>
|
219 |
+
In this work, we use sine and cosine functions of different frequencies:<br/>
|
220 |
+
PE(pos, 2i) = sin(pos / 10000^(2i/dmodel))<br/>
PE(pos, 2i+1) = cos(pos / 10000^(2i/dmodel))<br/>
|
222 |
+
where pos is the position and i is the dimension. That is, each dimension of the positional encoding<br/>
|
223 |
+
corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We<br/>chose this function because we hypothesized it would allow the model to easily learn to attend by<br/>relative positions, since for any fixed offset k, P Epos+k can be represented as a linear function of<br/>P Epos.<br/>
|
224 |
+
We also experimented with using learned positional embeddings <a href="">[8] </a>instead, and found that the two<br/>versions produced nearly identical results (see Table <a href="">3 </a>row (E)). We chose the sinusoidal version<br/>
|
225 |
+
because it may allow the model to extrapolate to sequence lengths longer than the ones encountered<br/>during training.<br/>
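A compact NumPy sketch of these sinusoids (mine, not the released implementation); the returned matrix is simply added to the (scaled) input embeddings.<br/>
#+begin_src python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model=512):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(max_len)[:, None]              # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]           # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dimensions
    pe[:, 1::2] = np.cos(angles)                   # odd dimensions
    return pe
#+end_src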
|
226 |
+
4<br/>
|
227 |
+
Why Self-Attention<br/>
|
228 |
+
In this section we compare various aspects of self-attention layers to the recurrent and convolu-<br/>tional layers commonly used for mapping one variable-length sequence of symbol representations<br/>
|
229 |
+
(x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi ∈ Rd, such as a hidden<br/>
|
230 |
+
layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we<br/>consider three desiderata.<br/>
|
231 |
+
One is the total computational complexity per layer. Another is the amount of computation that can<br/>be parallelized, as measured by the minimum number of sequential operations required.<br/>
|
232 |
+
The third is the path length between long-range dependencies in the network. Learning long-range<br/>
|
233 |
+
dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the<br/>ability to learn such dependencies is the length of the paths forward and backward signals have to<br/>traverse in the network. The shorter these paths between any combination of positions in the input<br/>and output sequences, the easier it is to learn long-range dependencies <a href="">[11]. </a>Hence we also compare<br/>the maximum path length between any two input and output positions in networks composed of the<br/>different layer types.<br/>
|
234 |
+
As noted in Table <a href="">1, </a>a self-attention layer connects all positions with a constant number of sequentially<br/>
|
235 |
+
executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of<br/>computational complexity, self-attention layers are faster than recurrent layers when the sequence<br/>length n is smaller than the representation dimensionality d, which is most often the case with<br/>sentence representations used by state-of-the-art models in machine translations, such as word-piece<br/><a href="">[31] </a>and byte-pair <a href="">[25] </a>representations. To improve computational performance for tasks involving<br/>very long sequences, self-attention could be restricted to considering only a neighborhood of size r in<br/>
|
236 |
+
6<br/>
|
237 |
+
<hr/>
|
238 |
+
<a name=7></a>the input sequence centered around the respective output position. This would increase the maximum<br/>path length to O(n/r). We plan to investigate this approach further in future work.<br/>
|
239 |
+
A single convolutional layer with kernel width k < n does not connect all pairs of input and output<br/>
|
240 |
+
positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels,<br/>or O(logk(n)) in the case of dilated convolutions <a href="">[15], </a>increasing the length of the longest paths<br/>between any two positions in the network. Convolutional layers are generally more expensive than<br/>recurrent layers, by a factor of k. Separable convolutions <a href="">[6], </a>however, decrease the complexity<br/>considerably, to O(k · n · d + n · d2). Even with k = n, however, the complexity of a separable<br/>convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer,<br/>the approach we take in our model.<br/>
|
241 |
+
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions<br/>
|
242 |
+
from our models and present and discuss examples in the appendix. Not only do individual attention<br/>heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic<br/>and semantic structure of the sentences.<br/>
|
243 |
+
5<br/>
|
244 |
+
Training<br/>
|
245 |
+
This section describes the training regime for our models.<br/>
|
246 |
+
5.1<br/>
|
247 |
+
Training Data and Batching<br/>
|
248 |
+
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million<br/>
|
249 |
+
sentence pairs. Sentences were encoded using byte-pair encoding <a href="">[3], </a>which has a shared source-<br/>target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT<br/>2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece<br/>vocabulary <a href="">[31]. </a>Sentence pairs were batched together by approximate sequence length. Each training<br/>batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000<br/>target tokens.<br/>
|
250 |
+
5.2<br/>
|
251 |
+
Hardware and Schedule<br/>
|
252 |
+
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using<br/>
|
253 |
+
the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We<br/>trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the<br/>bottom line of Table <a href="">3), </a>step time was 1.0 seconds. The big models were trained for 300,000 steps<br/>
|
254 |
+
(3.5 days).<br/>
|
255 |
+
5.3<br/>
|
256 |
+
Optimizer<br/>
|
257 |
+
We used the Adam optimizer <a href="">[17] </a>with β1 = 0.9, β2 = 0.98 and ε = 10^−9. We varied the learning<br/>
|
258 |
+
rate over the course of training, according to the formula:<br/>
|
259 |
+
lrate = dmodel^(−0.5) · min(step_num^(−0.5), step_num · warmup_steps^(−1.5))    (3)<br/>
|
262 |
+
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps,<br/>
|
263 |
+
and decreasing it thereafter proportionally to the inverse square root of the step number. We used<br/>warmup_steps = 4000.<br/>
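A direct transcription of equation (3), useful for plotting the warmup-then-decay shape (the guard against step 0 is my own addition):<br/>
#+begin_src python
def transformer_lrate(step, d_model=512, warmup_steps=4000):
    """Eq. (3): linear warmup for warmup_steps, then inverse-square-root decay."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# transformer_lrate(4000) gives the peak rate; afterwards the rate falls off as ~1/sqrt(step).
#+end_src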
|
264 |
+
5.4<br/>
|
265 |
+
Regularization<br/>
|
266 |
+
We employ three types of regularization during training:<br/>
|
267 |
+
Residual Dropout<br/>
|
268 |
+
We apply dropout <a href="">[27] </a>to the output of each sub-layer, before it is added to the<br/>
|
269 |
+
sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the<br/>positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of<br/>Pdrop = 0.1.<br/>
|
270 |
+
7<br/>
|
271 |
+
<hr/>
|
272 |
+
<a name=8></a>Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the<br/>
|
273 |
+
English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.<br/>
|
274 |
+
| Model | BLEU EN-DE | BLEU EN-FR | Training Cost (FLOPs) EN-DE | Training Cost (FLOPs) EN-FR |<br/>
| ByteNet <a href="">[15]</a> | 23.75 | | | |<br/>
| Deep-Att + PosUnk <a href="">[32]</a> | | 39.2 | | 1.0 · 10^20 |<br/>
| GNMT + RL <a href="">[31]</a> | 24.6 | 39.92 | 2.3 · 10^19 | 1.4 · 10^20 |<br/>
| ConvS2S <a href="">[8]</a> | 25.16 | 40.46 | 9.6 · 10^18 | 1.5 · 10^20 |<br/>
| MoE <a href="">[26]</a> | 26.03 | 40.56 | 2.0 · 10^19 | 1.2 · 10^20 |<br/>
| Deep-Att + PosUnk Ensemble <a href="">[32]</a> | | 40.4 | | 8.0 · 10^20 |<br/>
| GNMT + RL Ensemble <a href="">[31]</a> | 26.30 | 41.16 | 1.8 · 10^20 | 1.1 · 10^21 |<br/>
| ConvS2S Ensemble <a href="">[8]</a> | 26.36 | 41.29 | 7.7 · 10^19 | 1.2 · 10^21 |<br/>
| Transformer (base model) | 27.3 | 38.1 | 3.3 · 10^18 | 3.3 · 10^18 |<br/>
| Transformer (big) | 28.4 | 41.0 | 2.3 · 10^19 | 2.3 · 10^19 |<br/>
|
324 |
+
Label Smoothing<br/>
|
325 |
+
During training, we employed label smoothing of value εls = 0.1 <a href="">[30]. </a>This<br/>
|
326 |
+
hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.<br/>
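One common way to realize label smoothing is to replace the one-hot target with a softened distribution; the sketch below spreads the smoothing mass uniformly over the remaining vocabulary, which is an assumption about the exact variant rather than a statement of the paper's implementation.<br/>
#+begin_src python
import numpy as np

def smoothed_targets(token_ids, vocab_size, eps=0.1):
    """Soft targets: (1 - eps) on the true token, eps spread over the other tokens."""
    token_ids = np.asarray(token_ids)
    targets = np.full((len(token_ids), vocab_size), eps / (vocab_size - 1))
    targets[np.arange(len(token_ids)), token_ids] = 1.0 - eps
    return targets  # use with a cross-entropy loss against the model's softmax output
#+end_src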
|
327 |
+
6<br/>
|
328 |
+
Results<br/>
|
329 |
+
6.1<br/>
|
330 |
+
Machine Translation<br/>
|
331 |
+
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big)<br/>in Table <a href="">2) </a>outperforms the best previously reported models (including ensembles) by more than 2.0<br/>BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is<br/>listed in the bottom line of Table <a href="">3. </a>Training took 3.5 days on 8 P100 GPUs. Even our base model<br/>surpasses all previously published models and ensembles, at a fraction of the training cost of any of<br/>the competitive models.<br/>
|
332 |
+
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0,<br/>outperforming all of the previously published single models, at less than 1/4 the training cost of the<br/>previous state-of-the-art model. The Transformer (big) model trained for English-to-French used<br/>dropout rate Pdrop = 0.1, instead of 0.3.<br/>
|
333 |
+
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which<br/>
|
334 |
+
were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We<br/>
|
335 |
+
used beam search with a beam size of 4 and length penalty α = 0.6 <a href="">[31]. </a>These hyperparameters<br/>
|
336 |
+
were chosen after experimentation on the development set. We set the maximum output length during<br/>
|
337 |
+
inference to input length + 50, but terminate early when possible <a href="">[31].</a><br/>
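The checkpoint averaging mentioned above amounts to element-wise averaging of parameters across the saved checkpoints; a minimal sketch, assuming each checkpoint is available as a name-to-array dictionary (the storage format here is an assumption).<br/>
#+begin_src python
import numpy as np

def average_checkpoints(checkpoints):
    """checkpoints: list of dicts mapping parameter name -> np.ndarray.
    Returns one dict with element-wise averaged weights."""
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return averaged
#+end_src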
|
338 |
+
Table <a href="">2 </a>summarizes our results and compares our translation quality and training costs to other model<br/>
|
339 |
+
architectures from the literature. We estimate the number of floating point operations used to train a<br/>model by multiplying the training time, the number of GPUs used, and an estimate of the sustained<br/>single-precision floating-point capacity of each GPU <a href="">5.</a><br/>
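As a back-of-the-envelope check of this estimate for the big model (8 P100 GPUs, 3.5 days of training, and the 9.5 TFLOPS sustained-throughput figure from footnote 5):<br/>
#+begin_src python
# Training-cost estimate: GPUs x sustained FLOPS per GPU x wall-clock seconds.
gpus, sustained_flops, days = 8, 9.5e12, 3.5
total_flops = gpus * sustained_flops * days * 24 * 3600
print(f"{total_flops:.1e}")   # ~2.3e19, matching the Transformer (big) entry in Table 2
#+end_src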
|
340 |
+
6.2<br/>
|
341 |
+
Model Variations<br/>
|
342 |
+
To evaluate the importance of different components of the Transformer, we varied our base model<br/>
|
343 |
+
in different ways, measuring the change in performance on English-to-German translation on the<br/>development set, newstest2013. We used beam search as described in the previous section, but no<br/>checkpoint averaging. We present these results in Table <a href="">3.</a><br/>
|
344 |
+
In Table <a href="">3 </a>rows (A), we vary the number of attention heads and the attention key and value dimensions,<br/>keeping the amount of computation constant, as described in Section <a href="">3.2.2. </a>While single-head<br/>attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.<br/>
|
345 |
+
5We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.<br/>
|
346 |
+
8<br/>
|
347 |
+
<hr/>
|
348 |
+
<a name=9></a>Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base<br/>
|
349 |
+
model. All metrics are on the English-to-German translation development set, newstest2013. Listed<br/>perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to<br/>per-word perplexities.<br/>
|
350 |
+
|  | N | dmodel | dff | h | dk | dv | Pdrop | εls | train steps | PPL (dev) | BLEU (dev) | params ×10^6 |<br/>
| base | 6 | 512 | 2048 | 8 | 64 | 64 | 0.1 | 0.1 | 100K | 4.92 | 25.8 | 65 |<br/>
| (A) |  |  |  | 1 | 512 | 512 |  |  |  | 5.29 | 24.9 |  |<br/>
|  |  |  |  | 4 | 128 | 128 |  |  |  | 5.00 | 25.5 |  |<br/>
|  |  |  |  | 16 | 32 | 32 |  |  |  | 4.91 | 25.8 |  |<br/>
|  |  |  |  | 32 | 16 | 16 |  |  |  | 5.01 | 25.4 |  |<br/>
| (B) |  |  |  |  | 16 |  |  |  |  | 5.16 | 25.1 | 58 |<br/>
|  |  |  |  |  | 32 |  |  |  |  | 5.01 | 25.4 | 60 |<br/>
| (C) | 2 |  |  |  |  |  |  |  |  | 6.11 | 23.7 | 36 |<br/>
|  | 4 |  |  |  |  |  |  |  |  | 5.19 | 25.3 | 50 |<br/>
|  | 8 |  |  |  |  |  |  |  |  | 4.88 | 25.5 | 80 |<br/>
|  |  | 256 |  |  | 32 | 32 |  |  |  | 5.75 | 24.5 | 28 |<br/>
|  |  | 1024 |  |  | 128 | 128 |  |  |  | 4.66 | 26.0 | 168 |<br/>
|  |  |  | 1024 |  |  |  |  |  |  | 5.12 | 25.4 | 53 |<br/>
|  |  |  | 4096 |  |  |  |  |  |  | 4.75 | 26.2 | 90 |<br/>
| (D) |  |  |  |  |  |  | 0.0 |  |  | 5.77 | 24.6 |  |<br/>
|  |  |  |  |  |  |  | 0.2 |  |  | 4.95 | 25.5 |  |<br/>
|  |  |  |  |  |  |  |  | 0.0 |  | 4.67 | 25.3 |  |<br/>
|  |  |  |  |  |  |  |  | 0.2 |  | 5.47 | 25.7 |  |<br/>
| (E) | positional embedding instead of sinusoids |  |  |  |  |  |  |  |  | 4.92 | 25.7 |  |<br/>
| big | 6 | 1024 | 4096 | 16 |  |  | 0.3 |  | 300K | 4.33 | 26.4 | 213 |<br/>
|
469 |
+
In Table <a href="">3 </a>rows (B), we observe that reducing the attention key size dk hurts model quality. This<br/>suggests that determining compatibility is not easy and that a more sophisticated compatibility<br/>function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected,<br/>bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our<br/>sinusoidal positional encoding with learned positional embeddings <a href="">[8], </a>and observe nearly identical<br/>results to the base model.<br/>
|
470 |
+
7<br/>
|
471 |
+
Conclusion<br/>
|
472 |
+
In this work, we presented the Transformer, the first sequence transduction model based entirely on<br/>attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with<br/>multi-headed self-attention.<br/>
|
473 |
+
For translation tasks, the Transformer can be trained significantly faster than architectures based<br/>on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014<br/>English-to-French translation tasks, we achieve a new state of the art. In the former task our best<br/>model outperforms even all previously reported ensembles.<br/>
|
474 |
+
We are excited about the future of attention-based models and plan to apply them to other tasks. We<br/>
|
475 |
+
plan to extend the Transformer to problems involving input and output modalities other than text and<br/>to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs<br/>such as images, audio and video. Making generation less sequential is another research goal of ours.<br/>
|
476 |
+
The code we used to train and evaluate our models is available at <a href="https://github.com/tensorflow/tensor2tensor">https://github.com/tensorflow/tensor2tensor</a>.<br/>
|
478 |
+
Acknowledgements<br/>
|
479 |
+
We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful<br/>
|
480 |
+
comments, corrections and inspiration.<br/>
|
481 |
+
9<br/>
|
482 |
+
<hr/>
|
483 |
+
<a name=10></a>References<br/>
|
484 |
+
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. <i>arXiv preprint</i><br/>
|
485 |
+
<i>arXiv:1607.06450</i>, 2016.<br/>
|
486 |
+
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly<br/>
|
487 |
+
learning to align and translate. <i>CoRR</i>, abs/1409.0473, 2014.<br/>
|
488 |
+
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural<br/>
|
489 |
+
machine translation architectures. <i>CoRR</i>, abs/1703.03906, 2017.<br/>
|
490 |
+
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine<br/>
|
491 |
+
reading. <i>arXiv preprint arXiv:1601.06733</i>, 2016.<br/>
|
492 |
+
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,<br/>
|
493 |
+
and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical<br/>machine translation. <i>CoRR</i>, abs/1406.1078, 2014.<br/>
|
494 |
+
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. <i>arXiv</i><br/>
|
495 |
+
<i>preprint arXiv:1610.02357</i>, 2016.<br/>
|
496 |
+
[7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation<br/>
|
497 |
+
of gated recurrent neural networks on sequence modeling. <i>CoRR</i>, abs/1412.3555, 2014.<br/>
|
498 |
+
[8] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolu-<br/>
|
499 |
+
tional sequence to sequence learning. <i>arXiv preprint arXiv:1705.03122v2</i>, 2017.<br/>
|
500 |
+
[9] Alex Graves.<br/>
|
501 |
+
Generating sequences with recurrent neural networks.<br/>
|
502 |
+
<i>arXiv preprint</i><br/>
|
503 |
+
<i>arXiv:1308.0850</i>, 2013.<br/>
|
504 |
+
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for im-<br/>
|
505 |
+
age recognition. In <i>Proceedings of the IEEE Conference on Computer Vision and Pattern</i><br/>
|
506 |
+
<i>Recognition</i>, pages 770–778, 2016.<br/>
|
507 |
+
[11] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in<br/>
|
508 |
+
recurrent nets: the difficulty of learning long-term dependencies, 2001.<br/>
|
509 |
+
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. <i>Neural computation</i>,<br/>
|
510 |
+
9(8):1735–1780, 1997.<br/>
|
511 |
+
[13] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring<br/>
|
512 |
+
the limits of language modeling. <i>arXiv preprint arXiv:1602.02410</i>, 2016.<br/>
|
513 |
+
[14] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In <i>International Conference</i><br/>
|
514 |
+
<i>on Learning Representations (ICLR)</i>, 2016.<br/>
|
515 |
+
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Ko-<br/>
|
516 |
+
ray Kavukcuoglu. Neural machine translation in linear time. <i>arXiv preprint arXiv:1610.10099v2</i>,<br/>2017.<br/>
|
517 |
+
[16] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks.<br/>
|
518 |
+
In <i>International Conference on Learning Representations</i>, 2017.<br/>
|
519 |
+
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In <i>ICLR</i>, 2015.<br/>
|
520 |
+
[18] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. <i>arXiv preprint</i><br/>
|
521 |
+
<i>arXiv:1703.10722</i>, 2017.<br/>
|
522 |
+
[19] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen<br/>
|
523 |
+
Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. <i>arXiv preprint<br/>arXiv:1703.03130</i>, 2017.<br/>
|
524 |
+
[20] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In <i>Advances in Neural</i><br/>
|
525 |
+
<i>Information Processing Systems, (NIPS)</i>, 2016.<br/>
|
526 |
+
10<br/>
|
527 |
+
<hr/>
|
528 |
+
<a name=11></a>[21] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-<br/>
|
529 |
+
based neural machine translation. <i>arXiv preprint arXiv:1508.04025</i>, 2015.<br/>
|
530 |
+
[22] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention<br/>
|
531 |
+
model. In <i>Empirical Methods in Natural Language Processing</i>, 2016.<br/>
|
532 |
+
[23] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive<br/>
|
533 |
+
summarization. <i>arXiv preprint arXiv:1705.04304</i>, 2017.<br/>
|
534 |
+
[24] Ofir Press and Lior Wolf. Using the output embedding to improve language models. <i>arXiv</i><br/>
|
535 |
+
<i>preprint arXiv:1608.05859</i>, 2016.<br/>
|
536 |
+
[25] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words<br/>
|
537 |
+
with subword units. <i>arXiv preprint arXiv:1508.07909</i>, 2015.<br/>
|
538 |
+
[26] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,<br/>
|
539 |
+
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts<br/>layer. <i>arXiv preprint arXiv:1701.06538</i>, 2017.<br/>
|
540 |
+
[27] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-<br/>
|
541 |
+
nov. Dropout: a simple way to prevent neural networks from overfitting. <i>Journal of Machine</i><br/>
|
542 |
+
<i>Learning Research</i>, 15(1):1929–1958, 2014.<br/>
|
543 |
+
[28] Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. End-to-end memory<br/>
|
544 |
+
networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,<br/>
|
545 |
+
<i>Advances in Neural Information Processing Systems 28</i>, pages 2440–2448. Curran Associates,<br/>
|
546 |
+
Inc., 2015.<br/>
|
547 |
+
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural<br/>
|
548 |
+
networks. In <i>Advances in Neural Information Processing Systems</i>, pages 3104–3112, 2014.<br/>
|
549 |
+
[30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.<br/>
|
550 |
+
Rethinking the inception architecture for computer vision. <i>CoRR</i>, abs/1512.00567, 2015.<br/>
|
551 |
+
[31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang<br/>
|
552 |
+
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine<br/>translation system: Bridging the gap between human and machine translation. <i>arXiv preprint<br/>arXiv:1609.08144</i>, 2016.<br/>
|
553 |
+
[32] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with<br/>
|
554 |
+
fast-forward connections for neural machine translation. <i>CoRR</i>, abs/1606.04199, 2016.<br/>
|
555 |
+
11<br/>
|
556 |
+
<hr/>
|
557 |
+
</body>
|
558 |
+
</html>
|