A General Survey on Attention Mechanisms in Deep Learning

Gianni Brauwers and Flavius Frasincar

Abstract - Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed, and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in the field of attention models is considered.

Index Terms - Attention models, deep learning, introductory and survey, neural nets, supervised learning

G. Brauwers and F. Frasincar are with the Erasmus School of Economics, Erasmus University Rotterdam, 3000 DR, Rotterdam, the Netherlands (e-mail: {frasincar, brauwers}@ese.eur.nl). Manuscript received July 6, 2020; revised June 21, 2021. Corresponding author: F. Frasincar.

1 INTRODUCTION

The idea of mimicking human attention first arose in the field of computer vision, in an attempt to reduce the computational complexity of image processing while improving performance by introducing a model that would only focus on specific regions of images instead of the entire picture. However, the true starting point of the attention mechanisms we know today is often attributed to the field of natural language processing, where Bahdanau et al. implemented attention in a machine translation model to address certain issues with the structure of recurrent neural networks. After Bahdanau et al. emphasized the advantages of attention, attention techniques were refined and quickly became popular for a variety of tasks, such as text classification, image captioning, sentiment analysis, and speech recognition.

Attention has become a popular technique in deep learning for several reasons. Firstly, models that incorporate attention mechanisms attain state-of-the-art results for all of the previously mentioned tasks, and many others. Furthermore, most attention mechanisms can be trained jointly with a base model, such as a recurrent neural network or a convolutional neural network, using regular backpropagation. Additionally, attention introduces a certain form of interpretability into neural network models, which are generally known to be highly complicated to interpret. Moreover, the popularity of attention mechanisms was further boosted by the introduction of the Transformer model, which proved how effective attention can be.

Attention was originally introduced as an extension to recurrent neural networks. However, the Transformer model poses a major development in attention research, as it demonstrates that the attention mechanism alone is sufficient to build a state-of-the-art model. This means that disadvantages, such as the fact that recurrent neural networks are particularly difficult to parallelize, can be circumvented. As was the case for the introduction of the original attention mechanism, the Transformer model was created for machine translation, but it was quickly adopted for other tasks, such as image processing, video processing, and recommender systems.
The purpose of this survey is to explain the general form of attention and to provide a comprehensive overview of attention techniques in deep learning. Other surveys have already been published on the subject of attention models, covering, for example, attention in computer vision, attention in graph models, and attention in natural language processing. This paper partly builds on the information presented in those surveys, yet we provide our own significant contributions. The main difference between this survey and the previously mentioned ones is that the other surveys generally focus on attention models within a certain domain, whereas this survey provides a cross-domain overview of attention techniques. We discuss the attention techniques in a general way, allowing them to be understood and applied in a variety of domains. Furthermore, we found the taxonomies presented in previous surveys to lack the depth and structure needed to properly distinguish the various attention mechanisms. Additionally, certain significant attention techniques have not yet been properly discussed in previous surveys, while other presented attention mechanisms lack either technical details or intuitive explanations. Therefore, in this paper, we present important attention techniques by means of a single framework using a uniform notation, a combination of both technical and intuitive explanations for each presented attention technique, and a comprehensive taxonomy of attention mechanisms.

The structure of this paper is as follows. Section 2 introduces a general attention model that provides the reader with a basic understanding of the properties of attention and how it can be applied. One of the main contributions of this paper is the taxonomy of attention techniques presented in Section 3. In that section, attention mechanisms are explained and categorized according to the presented taxonomy. Section 4 provides an overview of performance measures and methods for evaluating attention models. Furthermore, the taxonomy is used to evaluate the structure of various attention models. Lastly, in Section 5, we give our conclusions and suggestions for further research.

2 GENERAL ATTENTION MODEL

This section presents a general form of attention with corresponding notation. The notation introduced here is based on notation that was introduced in earlier work and later popularized. The framework presented in this section is used throughout the rest of this paper.

To implement a general attention model, it is necessary to first describe the general characteristics of a model that can employ attention. We will refer to the complete model as the task model, of which the structure is presented in Fig. 1. This model simply takes an input, carries out the specified task, and produces the desired output. For example, the task model can be a language model that takes as input a piece of text, and produces as output a summary of the contents, a classification of the sentiment, or the text translated word for word into another language. Alternatively, the task model can take an image, and produce a caption or segmentation for that image.

Fig. 1. An illustration of the general structure of the task model.

The task model consists of four submodels: the feature model, the query model, the attention model, and the output model.
In Subsection 2.1, the feature model and query model are discussed, which are used to prepare the input for the attention calculation. In Subsection 2.2, the attention model and output model are discussed, which are concerned with producing the output.

2.1 Attention Input

Suppose the task model takes as input the matrix $X \in \mathbb{R}^{d_x \times n_x}$, where $d_x$ represents the size of the input vectors and $n_x$ represents the number of input vectors. The columns of this matrix can represent the words in a sentence, the pixels in an image, the characteristics of an acoustic sequence, or any other collection of inputs. The feature model is then employed to extract the $n_f$ feature vectors $f_1, \dots, f_{n_f} \in \mathbb{R}^{d_f}$ from $X$, where $d_f$ represents the size of the feature vectors. The feature model can be a recurrent neural network (RNN), a convolutional neural network (CNN), a simple embedding layer, a linear transformation of the original data, or no transformation at all. Essentially, the feature model consists of all the steps that transform the original input $X$ into the feature vectors $f_1, \dots, f_{n_f}$ that the attention model will attend to.

Fig. 2. The inner mechanisms of the general attention module: keys and values are extracted from the features, attention scores are calculated and aligned, and a weighted average of the values is produced.

To determine which vectors to attend to, the attention model requires the query $q \in \mathbb{R}^{d_q}$, where $d_q$ indicates the size of the query vector. This query is extracted by the query model, and is generally designed based on the type of output that is desired of the model. A query tells the attention model which feature vectors to attend to. It can be interpreted literally as a query, or a question. For example, for the task of image captioning, suppose that one uses a decoder RNN model to produce the output caption based on feature vectors obtained from the image by a CNN. At each prediction step, the hidden state of the RNN model can be used as a query to attend to the CNN feature vectors. In each step, the query is a question in the sense that it asks for the necessary information from the feature vectors based on the current prediction context.

2.2 Attention Output

The feature vectors and query are used as input for the attention model. This model consists of a single, or a collection of, general attention modules. An overview of a general attention module is presented in Fig. 2. The input of the general attention module is the query $q \in \mathbb{R}^{d_q}$ and the matrix of feature vectors $F = [f_1, \dots, f_{n_f}] \in \mathbb{R}^{d_f \times n_f}$. Two separate matrices are extracted from the matrix $F$: the keys matrix $K = [k_1, \dots, k_{n_f}] \in \mathbb{R}^{d_k \times n_f}$ and the values matrix $V = [v_1, \dots, v_{n_f}] \in \mathbb{R}^{d_v \times n_f}$, where $d_k$ and $d_v$ indicate, respectively, the dimensions of the key vectors (columns of $K$) and value vectors (columns of $V$). The general way of obtaining these matrices is through a linear transformation of $F$ using the weight matrices $W^K \in \mathbb{R}^{d_k \times d_f}$ and $W^V \in \mathbb{R}^{d_v \times d_f}$, for $K$ and $V$, respectively. The calculations of $K$ and $V$ are presented in (1). Both weight matrices can be learned during training or predefined by the researcher. For example, one can choose to define both $W^K$ and $W^V$ as equal to the identity matrix to retain the original feature vectors. Other ways of defining the keys and the values are also possible, such as using completely separate inputs for the keys and values. The only constraint to be obeyed is that the number of columns in $K$ and $V$ remains the same.

$$K = W^K F, \qquad V = W^V F. \tag{1}$$

The goal of the attention module is to produce a weighted average of the value vectors in $V$. The weights used to produce this output are obtained via an attention scoring and alignment step.
The query $q$ and the keys matrix $K$ are used to calculate the vector of attention scores $e = [e_1, \dots, e_{n_f}] \in \mathbb{R}^{n_f}$. This is done via the score function $\text{score}()$, as illustrated in (2):

$$e_l = \text{score}(q, k_l). \tag{2}$$

As discussed before, the query symbolizes a request for information. The attention score $e_l$ represents how important the information contained in the key vector $k_l$ is according to the query. If the dimensions of the query and key vectors are the same, an example of a score function would be to take the dot-product of the vectors. The different types of score functions are further discussed in Section 3.2.1.

Next, the attention scores are processed further through an alignment layer. The attention scores can generally have a wide range outside of $[0, 1]$. However, since the goal is to produce a weighted average, the scores are redistributed via an alignment function $\text{align}()$, as defined in (3):

$$a_l = \text{align}(e_l; e), \tag{3}$$

where $a_l \in \mathbb{R}^1$ is the attention weight corresponding to the $l$th value vector. One example of an alignment function would be the softmax function, but various other alignment types are discussed in Section 3.2.2. The attention weights provide a rather intuitive interpretation of the attention module: each weight is a direct indication of how important each feature vector is relative to the others for this particular problem. This can provide a more in-depth understanding of the model behaviour, and of the relations between inputs and outputs. The vector of attention weights $a = [a_1, \dots, a_{n_f}] \in \mathbb{R}^{n_f}$ is used to produce the context vector $c \in \mathbb{R}^{d_v}$ by calculating a weighted average of the columns of the values matrix $V$, as shown in (4):

$$c = \sum_{l=1}^{n_f} a_l v_l. \tag{4}$$

As illustrated in Fig. 1, the context vector is then used in the output model to create the output $y$. This output model translates the context vector into an output prediction. For example, it could be a simple softmax layer that takes as input the context vector $c$, as shown in (5):

$$y = \text{softmax}(W^c c + b^c), \tag{5}$$

where $d_y$ is the number of output choices or classes, and $W^c \in \mathbb{R}^{d_y \times d_v}$ and $b^c \in \mathbb{R}^{d_y}$ are trainable weights.
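To make the flow of Eqs. (1)-(5) concrete, the following is a minimal NumPy sketch of a single general attention module, assuming a dot-product score function, softmax alignment, a softmax output model, and random weights standing in for trained parameters. It is an illustration of the framework, not an implementation from the literature; all sizes are arbitrary.

```python
# A minimal sketch of the general attention module of Fig. 2 and Eqs. (1)-(5).
import numpy as np

rng = np.random.default_rng(0)

d_f, n_f, d_k, d_v, d_q, d_y = 8, 5, 4, 4, 4, 3   # assumed sizes (d_q = d_k for dot-product scoring)
F = rng.normal(size=(d_f, n_f))                   # feature vectors (columns), from the feature model
q = rng.normal(size=(d_q,))                       # query, from the query model

W_K = rng.normal(size=(d_k, d_f))                 # Eq. (1): keys and values are linear
W_V = rng.normal(size=(d_v, d_f))                 # transformations of the features
K = W_K @ F                                       # d_k x n_f
V = W_V @ F                                       # d_v x n_f

e = K.T @ q                                       # Eq. (2): dot-product scores, one per key
a = np.exp(e) / np.exp(e).sum()                   # Eq. (3): softmax alignment
c = V @ a                                         # Eq. (4): context vector, weighted average of values

W_c = rng.normal(size=(d_y, d_v))                 # Eq. (5): a simple softmax output model
b_c = np.zeros(d_y)
logits = W_c @ c + b_c
y = np.exp(logits) / np.exp(logits).sum()
print(a, y)
```

In practice, $W^K$, $W^V$, $W^c$, and $b^c$ would be trained jointly with the rest of the task model via backpropagation.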
2.3 Attention Applications

Attention is a rather general mechanism that can be used in a wide variety of problem domains. Consider the task of machine translation using an RNN model, and the problem of image classification using a basic CNN model. While an RNN produces a sequence of hidden state vectors, a CNN creates feature maps, where each region in the image is represented by a feature vector. The RNN hidden states are organized sequentially, while the CNN feature maps are organized spatially. Yet, attention can still be applied in both situations, since the attention mechanism does not inherently depend on the organization of the feature vectors. This characteristic makes attention easy to implement in a wide variety of models in different domains. Another domain where attention can be applied is audio processing. Acoustic sequences can be represented by a sequence of feature vectors that relate to certain time periods of the audio sample. These vectors could simply be the raw input audio, or they can be extracted via, for example, an RNN or CNN. Video processing is another domain where attention can be applied intuitively. Video data consists of sequences of images, so attention can be applied to the individual images, as well as to the entire sequence. Recommender systems often incorporate a user's interaction history to produce recommendations. Feature vectors can be extracted based on, for example, the IDs or other characteristics of the products the user interacted with, and attention can be applied to them. Attention can generally also be applied to many problems that use a time series as input, be it medical, financial, or anything else, as long as feature vectors can be extracted.

The fact that attention does not rely on the organization of the feature vectors allows it to be applied to various problems that each use data with different structures, as illustrated by the previous domain examples. Yet, this can be taken even further by applying attention to data with an irregular structure. For example, protein structures, city traffic flows, and communication networks cannot always be represented using neatly structured organizations, such as sequences, like time series, or grids, like images. In such cases, the different aspects of the data are often represented as nodes in a graph. These nodes can be represented by feature vectors, meaning that attention can be applied in domains that use graph-structured data as well. In general, attention can be applied to any problem for which a set of feature vectors can be defined or extracted. As such, the general attention model presented in Fig. 2 is applicable to a wide range of domains. The challenge, however, is that there is a large variety of different applications and extensions of the general attention module. As such, in Section 3, a comprehensive overview is provided of a collection of different attention mechanisms.

3 ATTENTION TAXONOMY

There are many different types of attention mechanisms and extensions, and a model can use different combinations of these attention techniques. As such, we propose a taxonomy that can be used to classify different types of attention mechanisms. Fig. 3 provides a visual overview of the different categories and subcategories in which the attention mechanisms can be organized. The three major categories are based on whether an attention technique is designed to handle specific types of feature vectors (feature-related), specific types of model queries (query-related), or whether it is simply a general mechanism that is related to neither the feature model nor the query model (general). Further explanations of these categories and their subcategories are provided in the following subsections. Each mechanism discussed in this section is either a modification of the existing inner mechanisms of the general attention module presented in Section 2, or an extension of it. The presented taxonomy can also be used to analyze the architecture of attention models.
Namely, the major categories and their subcategories can be interpreted as orthogonal dimensions of an attention model. An attention model can consist of a combination of techniques taken from any or all categories. Some characteristics, such as the scoring and alignment functions, are generally required for any attention model. Other mechanisms, such as multi-head attention or co-attention, are not necessary in every situation. Lastly, in Table 1, an overview of the used notation with corresponding descriptions is provided.

Fig. 3. A taxonomy of attention mechanisms. The feature-related category covers the multiplicity of features (singular features attention; coarse-grained, fine-grained, and multi-grained co-attention, with alternating and interactive co-attention as coarse-grained variants and parallel co-attention as a fine-grained variant; rotatory attention), the levels of features (single-level attention, attention-via-attention, hierarchical attention), and the representations of features (single-representational and multi-representational attention). The general category covers scoring (additive, multiplicative, scaled multiplicative, general, biased general, activated general, and similarity scoring), alignment (global/soft, hard, local, and reinforced alignment), and dimensionality (single-dimensional and multi-dimensional attention). The query-related category covers the type of queries (basic, specialized, and self-attentive queries) and the multiplicity of queries (singular query, multi-head, multi-hop, and capsule-based attention).

TABLE 1
Notation.
$F$: Matrix of size $d_f \times n_f$ containing the feature vectors $f_1, \dots, f_{n_f} \in \mathbb{R}^{d_f}$ as columns. These feature vectors are extracted by the feature model.
$K$: Matrix of size $d_k \times n_f$ containing the key vectors $k_1, \dots, k_{n_f} \in \mathbb{R}^{d_k}$ as columns. These vectors are used to calculate the attention scores.
$V$: Matrix of size $d_v \times n_f$ containing the value vectors $v_1, \dots, v_{n_f} \in \mathbb{R}^{d_v}$ as columns. These vectors are used to calculate the context vector.
$W^K$: Weight matrix of size $d_k \times d_f$ used to create the $K$ matrix from the $F$ matrix.
$W^V$: Weight matrix of size $d_v \times d_f$ used to create the $V$ matrix from the $F$ matrix.
$q$: Query vector of size $d_q$. This vector essentially represents a question, and is used to calculate the attention scores.
$c$: Context vector of size $d_v$. This vector is the output of the attention model.
$e$: Score vector of size $n_f$ containing the attention scores $e_1, \dots, e_{n_f} \in \mathbb{R}^1$. These are used to calculate the attention weights.
$a$: Attention weights vector of size $n_f$ containing the attention weights $a_1, \dots, a_{n_f} \in \mathbb{R}^1$. These are the weights used in the calculation of the context vector.

3.1 Feature-Related Attention Mechanisms

Based on a particular set of input data, a feature model extracts feature vectors so that the attention model can attend to these various vectors. These features may have specific structures that require special attention mechanisms to handle them. These mechanisms can be categorized as dealing with one of the following feature characteristics: the multiplicity of features, the levels of features, or the representations of features.

3.1.1 Multiplicity of Features

For most tasks, a model only processes a single input, such as an image, a sentence, or an acoustic sequence. We refer to such a mechanism as singular features attention. Other models are designed to use attention based on multiple inputs, which allows one to introduce more information into the model that can be exploited in various ways. However, this does imply the presence of multiple feature matrices that require special attention mechanisms to be fully used.
For example, one line of work introduces a concept named co-attention to allow the proposed visual question answering (VQA) model to jointly attend to both an image and a question. Co-attention mechanisms can generally be split up into two groups: coarse-grained co-attention and fine-grained co-attention. The difference between the two groups is the way attention scores are calculated based on the two feature matrices. Coarse-grained attention mechanisms use a compact representation of one feature matrix as a query when attending to the other feature vectors. Fine-grained co-attention, on the other hand, uses all feature vectors of one input as queries. As such, no information is lost, which is why these mechanisms are called fine-grained.

An example of coarse-grained co-attention is the alternating co-attention mechanism, which uses the context vector (a compact representation) from one attention module as the query for the other module, and vice versa. Alternating co-attention is presented in Fig. 4. Given a set of two input matrices $X^{(1)}$ and $X^{(2)}$, features are extracted by a feature model to produce the feature matrices $F^{(1)} \in \mathbb{R}^{d_f^{(1)} \times n_f^{(1)}}$ and $F^{(2)} \in \mathbb{R}^{d_f^{(2)} \times n_f^{(2)}}$, where $d_f^{(1)}$ and $d_f^{(2)}$ represent, respectively, the dimensions of the feature vectors extracted from the first and second inputs, while $n_f^{(1)}$ and $n_f^{(2)}$ represent, respectively, the number of feature vectors extracted from the first and second inputs. In the original co-attention work, co-attention is used for VQA, so the two input matrices are the image data and the question data, for which the feature model for the image consists of a CNN model, and the feature model for the question consists of word embeddings, a convolutional layer, a pooling layer, and an LSTM model.

Fig. 4. An illustration of alternating co-attention.

Firstly, attention is calculated for the first set of features $F^{(1)}$ without the use of a query (Attention Module 1 in Fig. 4), using an adjusted additive attention score function. The general form of the regular additive score function is shown in (6):

$$\text{score}(q, k_l) = w^T \text{act}(W_1 q + W_2 k_l + b), \tag{6}$$

where $\text{act}()$ is a non-linear activation function, and $w \in \mathbb{R}^{d_w}$, $W_1 \in \mathbb{R}^{d_w \times d_q}$, $W_2 \in \mathbb{R}^{d_w \times d_k}$, and $b \in \mathbb{R}^{d_w}$ are trainable weight matrices, for which $d_w$ is a predefined dimension of the weight matrices. A variant of this score function, adapted to be calculated without a query for the application at hand, is shown in (7):

$$e_l^{(0)} = w^{(1)T} \text{act}(W^{(1)} k_l^{(1)} + b^{(1)}), \tag{7}$$

where $w^{(1)} \in \mathbb{R}^{d_w}$, $W^{(1)} \in \mathbb{R}^{d_w \times d_k^{(1)}}$, and $b^{(1)} \in \mathbb{R}^{d_w}$ are trainable weight matrices for Attention Module 1, and $k_l^{(1)} \in \mathbb{R}^{d_k^{(1)}}$ is the $l$th column of the keys matrix $K^{(1)}$ that was obtained from $F^{(1)}$ via a linear transformation (see (1)), for which $d_w$ is a prespecified dimension of the weight matrices and $d_k^{(1)}$ is a prespecified dimension of the key vectors. Perhaps one may wonder why the query is absent when calculating attention in this manner. Essentially, the query in this attention model is learned alongside the other trainable parameters. As such, the query can be interpreted as a general question: "Which feature vectors contain the most important information?". This is also known as a self-attentive mechanism, since attention is calculated based only on the feature vectors themselves. Self-attention is explained in more detail in Subsection 3.3.1.
The scores are combined with an alignment function (see (3)), such as the softmax function, to create attention weights used to calculate the context vector $c^{(0)} \in \mathbb{R}^{d_v^{(1)}}$ (see (4)). This context vector is not used as the output of the attention model, but rather as a query for calculating the context vector $c^{(2)} \in \mathbb{R}^{d_v^{(2)}}$ based on the second feature matrix $F^{(2)}$, where $d_v^{(2)}$ is the dimension of the value vectors obtained from $F^{(2)}$ via a linear transformation (see (1)). For this module (Attention Module 2 in Fig. 4), attention scores are calculated using another score function with $c^{(0)}$ as query input, as presented in (8). Any score function can be used in this situation, but an additive function is used in the original work:

$$e_l^{(2)} = \text{score}(c^{(0)}, k_l^{(2)}). \tag{8}$$

These attention scores are then used to calculate attention weights using, for example, a softmax function as alignment function, after which the context vector $c^{(2)}$ can be derived as a weighted average of the second set of value vectors. Finally, the context vector $c^{(2)}$ is used as a query for the first attention module, which produces the context vector $c^{(1)}$ for the first feature matrix $F^{(1)}$. Attention scores are calculated according to (9), using the same function and weight matrices as seen in (7), but with an added query, making it the same as the general additive score function (see (6)). The rest of the attention calculation is similar to before.

$$e_l^{(1)} = \text{score}(c^{(2)}, k_l^{(1)}). \tag{9}$$

The produced context vectors $c^{(1)}$ and $c^{(2)}$ are concatenated and used for prediction in the output model.

Alternating co-attention inherently contains a form of sequentiality, due to the fact that the context vectors need to be calculated one after another. This may come with a computational disadvantage, since it is not possible to parallelize. Instead of using a sequential mechanism like alternating co-attention, the interactive co-attention mechanism calculates attention on both feature matrices in parallel, as depicted in Fig. 5. Instead of using the context vectors as queries, unweighted averages of the key vectors are used as queries. The calculation of the average keys is provided in (10), and the calculation of the attention scores is shown in (11). Any score function can be used in this case, but an additive score function is used in the original work.

Fig. 5. An illustration of interactive co-attention.

$$\bar{k}^{(1)} = \frac{1}{n_f^{(1)}} \sum_{l=1}^{n_f^{(1)}} k_l^{(1)}, \qquad \bar{k}^{(2)} = \frac{1}{n_f^{(2)}} \sum_{l=1}^{n_f^{(2)}} k_l^{(2)}; \tag{10}$$

$$e_l^{(1)} = \text{score}(\bar{k}^{(2)}, k_l^{(1)}), \qquad e_l^{(2)} = \text{score}(\bar{k}^{(1)}, k_l^{(2)}). \tag{11}$$

From the attention scores, attention weights are created via an alignment function and are used to produce the context vectors $c^{(1)}$ and $c^{(2)}$.
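The following is a minimal sketch of interactive co-attention (Fig. 5, Eqs. (10)-(11)). It assumes dot-product scoring and softmax alignment for brevity, whereas the original work uses an additive score function; it also assumes that keys and values are set equal to the features and that both inputs share the same key dimension so that the dot-product is defined.

```python
# A sketch of interactive co-attention: average keys of one input act as the query for the other.
import numpy as np

def soft_align(e):
    return np.exp(e) / np.exp(e).sum()

rng = np.random.default_rng(0)
d, n1, n2 = 6, 4, 7
K1 = V1 = rng.normal(size=(d, n1))     # features of input 1 used as keys and values
K2 = V2 = rng.normal(size=(d, n2))     # features of input 2

k1_mean = K1.mean(axis=1)              # Eq. (10): unweighted average keys act as queries
k2_mean = K2.mean(axis=1)

e1 = K1.T @ k2_mean                    # Eq. (11): score input 1 against the average key of input 2
e2 = K2.T @ k1_mean                    # and vice versa
c1 = V1 @ soft_align(e1)               # context vectors for both inputs, computed in parallel
c2 = V2 @ soft_align(e2)
print(c1.shape, c2.shape)
```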
While coarse-grained co-attention mechanisms use a compact representation of one input as a query when calculating attention for another input, fine-grained co-attention considers every element of each input individually when calculating attention scores. In this case, the query becomes a matrix. An example of fine-grained co-attention is parallel co-attention. Similarly to interactive co-attention, parallel co-attention calculates attention on the two feature matrices at the same time, as shown in Fig. 6. We start by evaluating the keys matrices $K^{(1)} \in \mathbb{R}^{d_k^{(1)} \times n_f^{(1)}}$ and $K^{(2)} \in \mathbb{R}^{d_k^{(2)} \times n_f^{(2)}}$, which are obtained by linearly transforming the feature matrices $F^{(1)}$ and $F^{(2)}$, where $d_k^{(1)}$ and $d_k^{(2)}$ are prespecified dimensions of the keys. The idea is to use the keys matrix from one input as the query for calculating attention on the other input. However, since $K^{(1)}$ and $K^{(2)}$ have completely different dimensions, an affinity matrix $A \in \mathbb{R}^{n_f^{(1)} \times n_f^{(2)}}$ is calculated that is used to essentially translate one keys matrix to the space of the other keys. One way of calculating $A$ is shown in (12):

$$A = \text{act}(K^{(1)T} W_A K^{(2)}), \tag{12}$$

where $W_A \in \mathbb{R}^{d_k^{(1)} \times d_k^{(2)}}$ is a trainable weights matrix and $\text{act}()$ is an activation function, for which the $\tanh()$ function is used in the original work. A different way of calculating this matrix is to use (13) to compute each individual element $A_{i,j}$ of the matrix $A$:

$$A_{i,j} = w_A^T \, \text{concat}(k_i^{(1)}, k_j^{(2)}, k_i^{(1)} \circ k_j^{(2)}), \tag{13}$$

where $w_A \in \mathbb{R}^{3 d_k}$ denotes a trainable vector of weights, $\text{concat}()$ denotes vector concatenation, and $\circ$ denotes element-wise multiplication, also known as the Hadamard product. Note that the keys of each keys matrix in this case must have the same dimension $d_k$ for the element-wise multiplication to work. The affinity matrix can be interpreted as a similarity matrix for the columns of the two keys matrices, and helps translate, for example, image keys to the same space as the keys of the words in a sentence, and vice versa.

Fig. 6. An illustration of parallel co-attention.

The vectors of attention scores $e^{(1)}$ and $e^{(2)}$ can be calculated using an altered version of the additive score function, as presented in (14) and (15). The previous attention score examples in this survey all used a score function to calculate each attention score for each value vector individually. However, (14) and (15) calculate the complete vector of all attention scores at once. Essentially, the attention scores are calculated in an aggregated form.

$$e^{(1)} = w_1^T \, \text{act}(W_2 K^{(2)} A^T + W_1 K^{(1)}); \tag{14}$$

$$e^{(2)} = w_2^T \, \text{act}(W_1 K^{(1)} A + W_2 K^{(2)}), \tag{15}$$

where $w_1 \in \mathbb{R}^{d_w}$, $w_2 \in \mathbb{R}^{d_w}$, $W_1 \in \mathbb{R}^{d_w \times d_k^{(1)}}$, and $W_2 \in \mathbb{R}^{d_w \times d_k^{(2)}}$ are trainable weight matrices, for which $d_w$ is a prespecified dimension of the weight matrices. In the original parallel co-attention work, $\tanh()$ is used as the activation function, and the feature matrices are used as the key matrices. In that case, the affinity matrix $A$ can be seen as a translator between feature spaces. As mentioned before, the affinity matrix is essentially a similarity matrix for the key vectors of the two inputs. This fact can also be exploited to determine the attention scores in a different way: one can take the maximum similarity value in a row or column as the attention score, as shown in (16).

$$e_i^{(1)} = \max_{j=1,\dots,n_f^{(2)}} A_{i,j}, \qquad e_j^{(2)} = \max_{i=1,\dots,n_f^{(1)}} A_{i,j}. \tag{16}$$

Next, the attention scores are used to calculate attention weights using an alignment function, so that the two context vectors $c^{(1)}$ and $c^{(2)}$ can be derived as weighted averages of the value vectors obtained by linearly transforming the features. For the alignment function, a softmax function can be used, with the value vectors simply set equal to the feature vectors. The resulting context vectors can be either concatenated or added together.
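The following sketch illustrates parallel co-attention (Fig. 6). The affinity matrix follows Eq. (12), and the max-based scores of Eq. (16) are used instead of the additive aggregation of Eqs. (14)-(15) to keep the example short; keys and values are set equal to the features, and all sizes and weights are assumptions.

```python
# A sketch of parallel co-attention via an affinity matrix between the two sets of keys.
import numpy as np

def soft_align(e):
    return np.exp(e) / np.exp(e).sum()

rng = np.random.default_rng(0)
d1, d2, n1, n2 = 5, 6, 4, 7
K1 = V1 = rng.normal(size=(d1, n1))            # keys/values of input 1 (e.g., image regions)
K2 = V2 = rng.normal(size=(d2, n2))            # keys/values of input 2 (e.g., words)
W_A = rng.normal(size=(d1, d2))                # trainable weights of the affinity matrix

A = np.tanh(K1.T @ W_A @ K2)                   # Eq. (12): affinity between every key pair, n1 x n2
e1 = A.max(axis=1)                             # Eq. (16): best match per row of A
e2 = A.max(axis=0)                             # Eq. (16): best match per column of A
c1 = V1 @ soft_align(e1)                       # one context vector per input
c2 = V2 @ soft_align(e2)
print(A.shape, c1.shape, c2.shape)
```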
Finally, coarse-grained and fine-grained co-attention can be combined to create an even more complex co-attention mechanism: multi-grained co-attention calculates both coarse-grained and fine-grained co-attention for the two inputs. Each mechanism produces one context vector per input. The four resulting context vectors are concatenated and used in the output model for prediction.

A mechanism separate from co-attention that still uses multiple inputs is the rotatory attention mechanism. This technique is typically used in a text sentiment analysis setting, where there are three inputs involved: the phrase for which the sentiment needs to be determined (target phrase), the text before the target phrase (left context), and the text after the target phrase (right context). The words in these three inputs are all encoded by the feature model, producing the feature matrices $F^t = [f_1^t, \dots, f_{n_f^t}^t] \in \mathbb{R}^{d_f^t \times n_f^t}$, $F^l = [f_1^l, \dots, f_{n_f^l}^l] \in \mathbb{R}^{d_f^l \times n_f^l}$, and $F^r = [f_1^r, \dots, f_{n_f^r}^r] \in \mathbb{R}^{d_f^r \times n_f^r}$ for the target phrase words, left context words, and right context words, respectively, where $d_f^t$, $d_f^l$, and $d_f^r$ represent the dimensions of the feature vectors for the corresponding inputs, and $n_f^t$, $n_f^l$, and $n_f^r$ represent the number of feature vectors for the corresponding inputs. The feature model used in the original work consists of word embeddings and separate Bi-LSTM models for the target phrase, the left context, and the right context. This means that the feature vectors are, in fact, the hidden state vectors obtained from the Bi-LSTM models. Using these features, the idea is to extract a single vector $r$ from the inputs such that a softmax layer can be used for classification. As such, we are now faced with two challenges: how to represent the inputs as a single vector, and how to incorporate the information from the left and right context into that vector. The rotatory attention mechanism is designed for this purpose.

Firstly, a single target phrase representation is created by using a pooling layer that takes the average over the columns of $F^t$, as shown in (17):

$$r^t = \frac{1}{n_f^t} \sum_{i=1}^{n_f^t} f_i^t. \tag{17}$$

$r^t$ is then used as a query to create a context vector out of the left and right contexts, separately. For example, for the left context, the key vectors $k_1^l, \dots, k_{n_f^l}^l \in \mathbb{R}^{d_k^l}$ and value vectors $v_1^l, \dots, v_{n_f^l}^l \in \mathbb{R}^{d_v^l}$ are extracted from the left context feature vectors $f_1^l, \dots, f_{n_f^l}^l \in \mathbb{R}^{d_f^l}$, similarly as before, where $d_k^l$ and $d_v^l$ are the dimensions of the key and value vectors, respectively. Note that the original rotatory attention work uses the feature vectors directly as keys and values, meaning that the linear transformation consists of a multiplication by an identity matrix. Next, the scores are calculated using (18):

$$e_i^l = \text{score}(r^t, k_i^l). \tag{18}$$

For the score function, an activated general score function with a tanh activation function is used in the original work. The attention scores can be combined with an alignment function, which takes the form of a softmax function, and the corresponding value vectors to produce the context vector $r^l \in \mathbb{R}^{d_v^l}$. An analogous procedure can be performed to obtain the representation of the right context, $r^r$. These two context representations can then be used to create new representations of the target phrase, again using attention. Firstly, the key vectors $k_1^t, \dots, k_{n_f^t}^t \in \mathbb{R}^{d_k^t}$ and value vectors $v_1^t, \dots, v_{n_f^t}^t \in \mathbb{R}^{d_v^t}$ are extracted from the target phrase feature vectors $f_1^t, \dots, f_{n_f^t}^t \in \mathbb{R}^{d_f^t}$, similarly as before, using a linear transformation, where $d_k^t$ and $d_v^t$ are the dimensions of the key and value vectors, respectively. Note, again, that the original feature vectors are used as keys and values. The attention scores for the left-aware target representation are then calculated using (19):

$$e_i^{lt} = \text{score}(r^l, k_i^t). \tag{19}$$
The attention scores can be combined with an alignment function and the corresponding value vectors to produce the context vector $r^{lt} \in \mathbb{R}^{d_v^t}$. For this attention calculation, the same score and alignment functions as before can be used. The right-aware target representation $r^{rt}$ can be calculated in a similar manner. Finally, to obtain the full representation vector $r$ that is used to determine the classification, the vectors $r^l$, $r^r$, $r^{lt}$, and $r^{rt}$ are concatenated, as shown in (20):

$$r = \text{concat}(r^l, r^r, r^{lt}, r^{rt}). \tag{20}$$

To summarize, rotatory attention uses the target phrase to compute new representations for the left and right context using attention, and then uses these left and right representations to calculate new representations for the target phrase. The first step is designed to capture the words in the left and right contexts that are most important to the target phrase. The second step captures the most important information in the actual target phrase itself. Essentially, the mechanism rotates attention between the target and the contexts to improve the representations.
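The following is a minimal sketch of rotatory attention (Eqs. (17)-(20)), assuming dot-product scoring, softmax alignment, and keys and values set equal to the features; the original formulation uses an activated general score function instead. The helper name `attend` and all sizes are illustrative assumptions.

```python
# A sketch of rotatory attention: the target queries the contexts, then the contexts query the target.
import numpy as np

def attend(query, keys, values):
    e = keys.T @ query                       # attention scores
    a = np.exp(e) / np.exp(e).sum()          # softmax alignment
    return values @ a                        # context vector

rng = np.random.default_rng(0)
d, n_t, n_l, n_r = 6, 3, 5, 4
F_t = rng.normal(size=(d, n_t))              # target phrase features
F_l = rng.normal(size=(d, n_l))              # left context features
F_r = rng.normal(size=(d, n_r))              # right context features

r_t = F_t.mean(axis=1)                       # Eq. (17): pooled target representation
r_l = attend(r_t, F_l, F_l)                  # Eq. (18): target attends to the left context
r_r = attend(r_t, F_r, F_r)                  #           and to the right context
r_lt = attend(r_l, F_t, F_t)                 # Eq. (19): contexts attend back to the target
r_rt = attend(r_r, F_t, F_t)
r = np.concatenate([r_l, r_r, r_lt, r_rt])   # Eq. (20): full representation for classification
print(r.shape)
```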
There are many applications where combining information from different inputs into a single model can be highly beneficial. For example, in the field of medical data, there are often many different types of data available, such as various scans or documents, that can provide different types of information. Co-attention has been used for automatic medical report generation to attend to both images and semantic tags simultaneously, and a co-attention model has been proposed that combines general demographics features and patient medical history features to predict future health information, with an ablation study showing that the co-attention part of the model specifically improves performance. A field where multi-feature attention has been extensively explored is the domain of recommender systems. For example, a co-attention network has been proposed that attends to both product reviews and the reviews a user has written, and a model has been proposed for video recommendation that attends to both user features and video features. Co-attention techniques have also been used in combination with graph networks for the purpose of, for example, reading comprehension across multiple documents and fake news detection. In comparison to co-attention, rotatory attention has typically been explored only in the field of sentiment analysis, which is most likely due to the specific structure of the data that is necessary to use this technique. One implementation of rotatory attention for sentiment analysis extends the mechanism by repeating the attention rotation to iteratively improve the representations further.

3.1.2 Feature Levels

The previously discussed attention mechanisms process data at a single level. We refer to these attention techniques as single-level attention mechanisms. However, some data types can be analyzed and represented on multiple levels. For example, when analyzing documents, one can analyze the document at the sentence level, the word level, or even the character level. When representations or embeddings of all these levels are available, one can exploit the extra levels of information. For example, one could choose to perform translation based on either just the characters, or just the words of the sentence. However, a technique named attention-via-attention allows one to incorporate information from both the character and the word levels. The idea is to predict the sentence translation character-by-character, while also incorporating information from a word-level attention module.

To begin with, a feature model (consisting of, for example, word embeddings and RNNs) is used to encode the input sentence into both a character-level feature matrix $F^{(c)} \in \mathbb{R}^{d_f^{(c)} \times n_f^{(c)}}$ and a word-level feature matrix $F^{(w)} \in \mathbb{R}^{d_f^{(w)} \times n_f^{(w)}}$, where $d_f^{(c)}$ and $n_f^{(c)}$ represent, respectively, the dimension of the character embeddings and the number of characters, while $d_f^{(w)}$ and $n_f^{(w)}$ represent the same but at the word level. It is crucial for this method that each level in the data can be represented or embedded.

Fig. 7. An illustration of attention-via-attention.

When attempting to predict a character in the translated sentence, a query $q^{(c)} \in \mathbb{R}^{d_q}$ is created by the query model (such as a character-level RNN), where $d_q$ is the dimension of the query vectors. As illustrated in Fig. 7, the query is used to calculate attention on the word-level feature vectors $F^{(w)}$. This generates the context vector $c^{(w)} \in \mathbb{R}^{d_v^{(w)}}$, where $d_v^{(w)}$ represents the dimension of the value vectors for the word-level attention module. This context vector summarizes which words contain the most important information for predicting the next character. If we know which words are most important, it becomes easier to identify which characters in the input sentence are most important. Thus, the next step is to attend to the character-level features in $F^{(c)}$, with an additional query input: the word-level context vector $c^{(w)}$. The actual query input for this attention module is therefore the concatenation of the query $q^{(c)}$ and the word context vector $c^{(w)}$. The output of this character-level attention module is the context vector $c^{(c)}$. The complete context output of the attention model is the concatenation of the word-level and character-level context vectors.
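The following is a minimal sketch of attention-via-attention (Fig. 7): a word-level attention module is queried first, and its context vector is concatenated to the query of the character-level module. Dot-product scoring, softmax alignment, random features, and the chosen sizes (including a character-level key dimension that matches the concatenated query) are all simplifying assumptions.

```python
# A sketch of attention-via-attention: attend to words first, then to characters.
import numpy as np

def attend(query, keys, values):
    e = keys.T @ query
    a = np.exp(e) / np.exp(e).sum()
    return values @ a

rng = np.random.default_rng(0)
d_q, d_w, d_c, n_w, n_c = 4, 4, 8, 6, 30           # d_c = d_q + d_w so dot-product scoring works below
F_w = rng.normal(size=(d_w, n_w))                  # word-level features
F_c = rng.normal(size=(d_c, n_c))                  # character-level features
q_c = rng.normal(size=(d_q,))                      # query from, e.g., a character-level decoder

c_w = attend(q_c, F_w, F_w)                        # which words matter for the next character?
q_full = np.concatenate([q_c, c_w])                # concatenated query for the character level
c_c = attend(q_full, F_c, F_c)                     # which characters matter, given those words?
c_out = np.concatenate([c_w, c_c])                 # complete context output of the attention model
print(c_out.shape)
```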
The attention-via-attention technique uses representations for each level. However, accurate representations may not always be available for each level of the data, or it may be desirable to let the model create the representations during the process by building them from lower-level representations. A technique referred to as hierarchical attention can be used in this situation. Hierarchical attention is another technique that allows one to apply attention on different levels of the data, yet the exact mechanisms work quite differently compared to attention-via-attention. The idea is to start at the lowest level, and then create representations, or summaries, of the next level using attention. This process is repeated until the highest level is reached. To make this clearer, suppose one attempts to create a model for document classification. We analyze a document containing $n_S$ sentences, with the $s$th sentence containing $n_s$ words, for $s = 1, \dots, n_S$. One could use attention based on just the collection of words to classify the document. However, a significant amount of important context is then left out of the analysis, since the model considers all words as a single long sentence, and therefore does not consider the context within the separate sentences. Instead, one can use the hierarchical structure of a document (words form sentences, and sentences form the document). Fig. 8 illustrates the structure of hierarchical attention.

Fig. 8. An illustration of hierarchical attention.

For each sentence in the document, a sentence representation $c^{(s)} \in \mathbb{R}^{d_v^{(S)}}$ is produced, for $s = 1, \dots, n_S$, where $d_v^{(S)}$ is the dimension of the value vectors used in the attention model for sentence representations (Attention Module S in Fig. 8). The representation is a context vector from an attention module that essentially summarizes the sentence. Each sentence is first put through a feature model to extract the feature matrix $F^{(s)} \in \mathbb{R}^{d_f^{(S)} \times n_s}$, for $s = 1, \dots, n_S$, where $d_f^{(S)}$ represents the dimension of the feature vector for each word, and $n_s$ represents the number of words in sentence $s$. For extra clarification, the columns of $F^{(s)}$ are feature vectors that correspond to the words in sentence $s$. As shown in Fig. 8, each feature matrix $F^{(s)}$ is used as input for an attention model, which produces the context vector $c^{(s)}$, for each $s = 1, \dots, n_S$. No queries are used in this step, so it can be considered a self-attentive mechanism. The context vectors are essentially summaries of the words in the sentences. The matrix of context vectors $C = [c^{(1)}, \dots, c^{(n_S)}] \in \mathbb{R}^{d_v^{(S)} \times n_S}$ is constructed by grouping all the obtained context vectors together as columns. Finally, attention is calculated using $C$ as feature input, producing the representation of the entire document in the context vector $c^{(D)} \in \mathbb{R}^{d_v^{(D)}}$, where $d_v^{(D)}$ is the dimension of the value vectors in the attention model for document representation (Attention Module D in Fig. 8). This context vector can be used to classify the document, since it is essentially a summary of all the sentences (and therefore also the words) in the document.

Multi-level models can be used in a variety of tasks. For example, hierarchical attention has been used in a recommender system to model user preferences at the long-term and short-term levels, and a hierarchical model has been proposed for recommending social media images based on user preferences. Hierarchical attention has also been successfully applied in other domains, for example, in a video action recognition model to capture motion information at the long-term and short-term levels, in a model for cross-domain sentiment classification, in a model for chatbot response generation, and, using image data, in a model for crowd counting.
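The following is a minimal sketch of hierarchical attention for document classification (Fig. 8): a self-attentive module first summarizes each sentence, and a second module summarizes the resulting sentence vectors into a document vector. Using a learned query with dot-product scoring is one of several possible self-attentive choices (see Subsection 3.3.1); sizes and random features are assumptions.

```python
# A sketch of hierarchical attention: words -> sentence summaries -> document summary.
import numpy as np

def self_attend(F, q):
    e = F.T @ q                              # query-less scoring with a learned (trainable) query q
    a = np.exp(e) / np.exp(e).sum()
    return F @ a                             # summary (context vector) of the columns of F

rng = np.random.default_rng(0)
d, n_words = 6, [4, 7, 5]                    # three sentences with 4, 7, and 5 words
sentences = [rng.normal(size=(d, n)) for n in n_words]   # word-level feature matrices F^(s)
q_word = rng.normal(size=(d,))               # trainable query of Attention Module S
q_sent = rng.normal(size=(d,))               # trainable query of Attention Module D

C = np.stack([self_attend(F_s, q_word) for F_s in sentences], axis=1)  # sentence summaries c^(s)
c_doc = self_attend(C, q_sent)               # document summary c^(D), used for classification
print(C.shape, c_doc.shape)
```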
3.1.3 Feature Representations

In a basic attention model, a single embedding or representation model is used to produce feature representations for the model to attend to. This is referred to as single-representational attention. Yet, one may also opt to incorporate multiple representations into the model. It has been argued that allowing a model access to multiple embeddings can allow one to create even higher-quality representations; similarly, multiple representations of the same book (textual, syntactic, semantic, visual, etc.) have been incorporated into the feature model. Feature representations are an important part of the attention model, but attention can also be an important part of the feature model. The idea is to create a new representation by taking a weighted average of multiple representations, where the weights are determined via attention. This technique is referred to as multi-representational attention, and allows one to create so-called meta-embeddings.

Suppose one wants to create a meta-embedding for a word $x$ for which $E$ embeddings $x^{(e_1)}, \dots, x^{(e_E)}$ are available. Each embedding $x^{(e_i)}$ is of size $d_{e_i}$, for $i = 1, \dots, E$. Since not all embeddings are of the same size, a transformation is performed to normalize the embedding dimensions. Using embedding-specific weight parameters, each embedding $x^{(e_i)}$ is transformed into the size-normalized embedding $x^{(t_i)} \in \mathbb{R}^{d_t}$, where $d_t$ is the size of every transformed word embedding, as shown in (21):

$$x^{(t_i)} = W_{e_i} x^{(e_i)} + b_{e_i}, \tag{21}$$

where $W_{e_i} \in \mathbb{R}^{d_t \times d_{e_i}}$ and $b_{e_i} \in \mathbb{R}^{d_t}$ are trainable, embedding-specific weight matrices. The final embedding $x^{(e)} \in \mathbb{R}^{d_t}$ is a weighted average of the previously calculated transformed representations, as shown in (22):

$$x^{(e)} = \sum_{i=1}^{E} a_i x^{(t_i)}. \tag{22}$$

The final representation $x^{(e)}$ can be interpreted as the context vector from an attention model, meaning that the weights $a_1, \dots, a_E \in \mathbb{R}^1$ are attention weights. Attention can be calculated as normal, where the columns of the features matrix $F$ are the transformed representations $x^{(t_1)}, \dots, x^{(t_E)}$. The query in this case can be ignored, since it is constant in all cases: essentially, the query is "Which representations are the most important?" in every situation. As such, this is a self-attentive mechanism.

While an interesting idea, applications of multi-representational attention are limited. One example is the use of a multi-representational attention mechanism to generate multi-lingual meta-embeddings. Another example is a multi-representational text classification model that incorporates different representations of the same text, using embeddings from part-of-speech tagging, named entity recognizers, and character-level and word-level embeddings.

3.2 General Attention Mechanisms

This major category consists of attention mechanisms that can be applied in any type of attention model. The structure of this component can be broken down into the following sub-aspects: the attention score function, the attention alignment, and the attention dimensionality.

3.2.1 Attention Scoring

The attention score function is a crucial component in how attention is calculated. Various approaches have been developed that each have their own advantages and disadvantages. An overview of these functions is provided in Table 2.

TABLE 2
Overview of score function (score(q, k_l)) forms.
Name | Function | Parameters
Additive (Concatenate) | $w^T \text{act}(W_1 q + W_2 k_l + b)$ | $w \in \mathbb{R}^{d_w}$, $W_1 \in \mathbb{R}^{d_w \times d_q}$, $W_2 \in \mathbb{R}^{d_w \times d_k}$, $b \in \mathbb{R}^{d_w}$
Multiplicative (Dot-Product) | $q^T k_l$ | -
Scaled Multiplicative | $q^T k_l / \sqrt{d_k}$ | -
General | $k_l^T W q$ | $W \in \mathbb{R}^{d_k \times d_q}$
Biased General | $k_l^T (W q + b)$ | $W \in \mathbb{R}^{d_k \times d_q}$, $b \in \mathbb{R}^{d_k}$
Activated General | $\text{act}(k_l^T W q + b)$ | $W \in \mathbb{R}^{d_k \times d_q}$, $b \in \mathbb{R}^1$
Similarity | $\text{similarity}(q, k_l)$ | -

Each row of Table 2 presents a possible form for the function $\text{score}(q, k_l)$, as seen in (23), where $q$ is the query vector and $k_l$ is the $l$th column of $K$. Note that the score functions presented in this section can be calculated more efficiently in matrix form using $K$ instead of each column separately. Nevertheless, the score functions are presented using $k_l$ to more clearly illustrate the relation between a key and a query.

$$e_l = \text{score}(q, k_l). \tag{23}$$
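The following sketch implements several of the score function forms of Table 2 for a single query-key pair, with randomly initialised weights standing in for trained parameters; in a real model these would be learned, and the scores would typically be computed in matrix form over all keys at once.

```python
# A sketch of score functions from Table 2, evaluated for one query-key pair.
import numpy as np

rng = np.random.default_rng(0)
d_q = d_k = d_w = 4                           # assumed sizes (d_q = d_k for the multiplicative forms)
q = rng.normal(size=(d_q,))
k = rng.normal(size=(d_k,))

def additive(q, k, w, W1, W2, b):             # Additive (Concatenate), cf. Eq. (6)
    return w @ np.tanh(W1 @ q + W2 @ k + b)

def multiplicative(q, k):                     # Multiplicative (Dot-Product)
    return q @ k

def scaled_multiplicative(q, k):              # Scaled Multiplicative
    return q @ k / np.sqrt(k.shape[0])

def general(q, k, W):                         # General
    return k @ (W @ q)

def activated_general(q, k, W, b):            # Activated General
    return np.tanh(k @ (W @ q) + b)

w, b = rng.normal(size=(d_w,)), np.zeros(d_w)
W1, W2 = rng.normal(size=(d_w, d_q)), rng.normal(size=(d_w, d_k))
W = rng.normal(size=(d_k, d_q))
print(additive(q, k, w, W1, W2, b), multiplicative(q, k),
      scaled_multiplicative(q, k), general(q, k, W),
      activated_general(q, k, W, 0.0))
```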
Due to their simplicity, the most popular choices for the score function are the concatenate (additive) score function and the multiplicative score function. The multiplicative score function has the advantage of being computationally inexpensive due to highly optimized vector operations. However, the multiplicative function may produce non-optimal results when the dimension $d_k$ is too large. When $d_k$ is large, the dot-product between $q$ and $k_l$ can grow large in magnitude. To illustrate this, consider an example where the elements of $q$ and $k_l$ are all normally distributed with a mean equal to zero and a variance equal to one. Then, the dot-product of the vectors has a variance of $d_k$. A higher variance means a higher chance of numbers that are large in magnitude. When the softmax function of the alignment step is then applied to these large numbers, the gradient becomes very small, meaning the model has trouble converging. To adjust for this, the multiplicative function can be scaled by the factor $\frac{1}{\sqrt{d_k}}$, producing the scaled multiplicative score function.

The multiplicative score function can also be extended by introducing a weights matrix $W$. This form, referred to as the general score function, allows for an extra transformation of $k_l$. The biased general score function is a further extension of the general function that introduces a bias weight vector $b$. A final extension of this function, named the activated general score function, includes the use of both a bias weight $b$ and an activation function $\text{act}()$. The previously presented score functions are all based on determining a type of similarity between the key vector and the query vector. As such, more typical similarity measures, such as the Euclidean (L2) distance and cosine similarity, can also be implemented. These scoring methods are summarized under the similarity score function, which is represented by the $\text{similarity}()$ function.

There typically is no common usage across domains regarding score functions. The choice of score function for a particular task is most often based on empirical experiments. However, there are exceptions when, for example, efficiency is vital. In models where this is the case, the multiplicative or scaled multiplicative score functions are typically the best choice. An example of this is the Transformer model, which is generally computationally expensive.

3.2.2 Attention Alignment

The attention alignment is the step after the attention scoring. This alignment process directly determines which parts of the input data the model will attend to. The alignment function is denoted as $\text{align}()$ and has various forms. The $\text{align}()$ function takes as input the previously calculated attention score vector $e$ and calculates, for each element $e_l$ of $e$, the attention weight $a_l$. These attention weights can then be used to create the context vector $c$ by taking a weighted average of the value vectors $v_1, \dots, v_{n_f}$:

$$c = \sum_{l=1}^{n_f} a_l v_l. \tag{24}$$

The most popular alignment method to calculate these weights is a simple softmax function, as depicted in (25):

$$a_l = \text{align}(e_l; e) = \frac{\exp(e_l)}{\sum_{j=1}^{n_f} \exp(e_j)}. \tag{25}$$

This alignment method is often referred to as soft alignment in computer vision settings, or global alignment for sequence data. Nevertheless, both terms represent the same function and can be interpreted similarly. Soft/global alignment can be interpreted as the model attending to all feature vectors. For example, the model attends to all regions in an image, or all words in a sentence.
Even though the attention model generally does focus more on specific parts of the input, every part of the input will receive at least some amount of attention due to the nature of the softmax function. Furthermore, an advantage of the softmax function is that it introduces a probabilistic interpretation of the input vectors. This allows one to easily analyze which parts of the input are important to the output predictions.

In contrast to soft/global alignment, other methods aim to achieve a more focused form of alignment. For example, hard alignment, also known as hard attention or non-deterministic attention, is an alignment type that forces the attention model to focus on exactly one feature vector. Firstly, this method implements the softmax function in the exact same way as global alignment. However, the outputs $a_1, \dots, a_{n_f}$ are not used as weights for the context vector calculation. Instead, these values are used as the probabilities from which the choice of the single value vector is drawn. A value $m \in \mathbb{R}^1$ is drawn from a multinomial distribution with $a_1, \dots, a_{n_f}$ as parameters for the probabilities. Then, the context vector is simply defined as follows:

$$c = v_m. \tag{26}$$

Hard alignment is typically more efficient at inference compared to soft alignment. On the other hand, the main disadvantage of hard attention is that, due to the stochastic alignment of attention, the training of the model cannot be done via the regular backpropagation method. Instead, simulation and sampling, or reinforcement learning, are required to calculate the gradient at the hard attention layer. As such, soft/global attention is generally preferred. However, a compromise can be made in certain situations. Local alignment is a method that implements a softmax distribution, similarly to soft/global alignment, but the softmax distribution is calculated based only on a subset of the inputs. This method is generally used in combination with sequence data. One has to specify a variable $p \in \mathbb{R}^1$ that determines the position of the region. Feature vectors close to $p$ will be attended to by the model, and vectors too far from $p$ will be ignored. The size of the subset is determined by the variable $D \in \mathbb{R}^1$. Summarizing, the attention model applies a softmax function to the attention scores in the subset $[p - D, p + D]$. In other words, a window is placed on the input and soft/global attention is calculated within that window:

$$a_l = \text{align}(e_l; e) = \frac{\exp(e_l)}{\sum_{j=p-D}^{p+D} \exp(e_j)}. \tag{27}$$

The question that remains is how to determine the location parameter $p$. The first method is referred to as monotonic alignment. This straightforward method entails simply setting the location parameter equal to the location of the prediction in the output sequence. Another method of determining the position of the region is referred to as predictive alignment. As the name entails, the model attempts to actually predict the location of interest in the sequence:

$$p = S \times \text{sigmoid}(w_p^T \tanh(W_p q)), \tag{28}$$

where $S \in \mathbb{R}^1$ is the length of the input sequence, and $w_p \in \mathbb{R}^{d_p}$ and $W_p \in \mathbb{R}^{d_p \times d_q}$ are both trainable weight parameters. The sigmoid function multiplied by $S$ makes sure that $p$ is in the range $[0, S]$. Additionally, it is recommended to add an extra term to the alignment function to favor alignment around $p$:

$$a_l = \text{align}(e_l; e) \times \exp\!\left(-\frac{(l - p)^2}{2\sigma^2}\right), \tag{29}$$

where $\sigma \in \mathbb{R}^1$ is empirically set equal to $\frac{D}{2}$.
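The following sketch applies three of these alignment choices to the same score vector: soft/global alignment (Eq. (25)), hard alignment (Eq. (26)), and local alignment with a monotonically chosen position (Eq. (27)). The scores, value vectors, and window size are assumed for illustration.

```python
# A sketch of soft, hard, and local alignment over the same attention scores.
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=10)                        # attention scores e_1, ..., e_nf
V = rng.normal(size=(6, 10))                   # value vectors as columns

def softmax(x):
    x = x - x.max()                            # numerical stability
    return np.exp(x) / np.exp(x).sum()

# Soft/global alignment: weighted average over all value vectors (Eq. (25)).
a_soft = softmax(e)
c_soft = V @ a_soft

# Hard alignment: sample exactly one value vector, using a_soft as probabilities (Eq. (26)).
m = rng.choice(len(e), p=a_soft)
c_hard = V[:, m]

# Local alignment: soft alignment restricted to a window [p - D, p + D] (Eq. (27)).
p, D = 4, 2                                    # monotonic alignment would set p to the output position
window = slice(max(p - D, 0), min(p + D + 1, len(e)))
a_local = np.zeros_like(e)
a_local[window] = softmax(e[window])
c_local = V @ a_local
print(c_soft.shape, c_hard.shape, c_local.shape)
```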
Another proposed method for compromising between soft and hard alignment is reinforced alignment. Similarly to local alignment, a subset of the feature vectors is determined, for which soft alignment is calculated. However, instead of using a window to determine the subset, reinforced alignment uses a reinforcement learning agent, similarly to hard alignment, to choose the subset of feature vectors. The attention calculation based on these chosen feature vectors is the same as regular soft alignment.

Soft alignment is often regarded as the standard alignment function for attention models in practically every domain. Yet, the other alignment methods have also seen interesting uses in various domains. For example, hard attention has been used for the task of visual question answering, both soft and hard attention have been used in a graph attention model for multi-agent game abstraction, and both global and local alignment have been used for review rating predictions. Reinforced alignment has been employed in combination with a co-attention structure for the task of aspect sentiment classification, and for the task of person re-identification using surveillance images.

3.2.3 Attention Dimensionality

All previous model specifications of attention use a scalar weight $a_l$ for each value vector $v_l$. This technique is referred to as single-dimensional attention. However, instead of determining a single attention score and weight for the entire vector, one can calculate weights for every single feature in those vectors separately. This technique is referred to as multi-dimensional attention, since the attention weights now become higher-dimensional vectors. The idea is that the model no longer has to attend to entire vectors, but can instead pick and choose specific elements from those vectors. More specifically, attention is calculated for each dimension. As such, the model must create a vector of attention weights $a_l \in \mathbb{R}^{d_v}$ for each value vector $v_l \in \mathbb{R}^{d_v}$. The context vector can then be calculated by summing the element-wise multiplications ($\circ$) of the value vectors $v_1, \dots, v_{n_f} \in \mathbb{R}^{d_v}$ and the corresponding attention weight vectors $a_1, \dots, a_{n_f} \in \mathbb{R}^{d_v}$, as follows:

$$c = \sum_{l=1}^{n_f} a_l \circ v_l. \tag{30}$$

However, since one needs to create attention weight vectors, this technique requires adjusted attention score and weight calculations. For example, the concatenate score function found in Table 2 can be adjusted by changing the weights vector $w \in \mathbb{R}^{d_w}$ to the weight matrix $W_d \in \mathbb{R}^{d_w \times d_v}$:

$$e_l = W_d^T \text{act}(W_1 q + W_2 k_l + b). \tag{31}$$

This new score function produces the attention score vectors $e_1, \dots, e_{n_f} \in \mathbb{R}^{d_v}$. These score vectors can be combined into a matrix of scores $e = [e_1, \dots, e_{n_f}] \in \mathbb{R}^{d_v \times n_f}$. To produce multi-dimensional attention weights, the alignment function stays the same, but it is applied for each feature across the attention score columns. To illustrate, when implementing soft attention, the attention weight produced from the $i$th element of score vector $e_l$ is defined as follows:

$$a_{l,i} = \text{align}(e_{l,i}; e) = \frac{\exp(e_{l,i})}{\sum_{j=1}^{n_f} \exp(e_{j,i})}, \tag{32}$$

where $e_{l,i}$ represents the $i$th element of score vector $e_l$, and $a_{l,i}$ is the $i$th element of the attention weights vector $a_l$. Finally, these attention weight vectors can be used to compute the context vector as presented in (30).
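The following sketch illustrates multi-dimensional attention (Eqs. (30)-(32)): the adjusted additive score function of Eq. (31) produces one score per feature dimension, the softmax of Eq. (32) is applied per dimension across the value vectors, and Eq. (30) sums the element-wise products. Sizes and random weights are assumptions standing in for trained parameters.

```python
# A sketch of multi-dimensional attention with the adjusted additive score function.
import numpy as np

rng = np.random.default_rng(0)
d_q, d_k, d_v, d_w, n_f = 4, 4, 6, 5, 7
q = rng.normal(size=(d_q,))
K = rng.normal(size=(d_k, n_f))
V = rng.normal(size=(d_v, n_f))

W_d = rng.normal(size=(d_w, d_v))              # replaces the weight vector w of Eq. (6)
W1 = rng.normal(size=(d_w, d_q))
W2 = rng.normal(size=(d_w, d_k))
b = np.zeros(d_w)

# Eq. (31): one score vector e_l in R^{d_v} per value vector, collected as columns.
E = W_d.T @ np.tanh(W1 @ q[:, None] + W2 @ K + b[:, None])       # d_v x n_f

# Eq. (32): softmax per dimension, i.e. across the n_f scores in each row.
A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)             # d_v x n_f

# Eq. (30): element-wise weighting of each value vector, then summation.
c = (A * V).sum(axis=1)                                          # context vector of size d_v
print(c.shape)
```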
Multi-dimensional attention is a very general mechanism that can be applied in practically every attention model, but actual applications of the technique have been relatively sparse. One application example is , where multi-dimensional attention is used in a model for named entity recognition based on text and visual context from multimedia posts. In , multi-dimensional attention is used in a model for answer selection in community question answering. In , the U-Net model for medical image segmentation is extended with a multi-dimensional attention mechanism. Similarly, in , the Transformer model is extended with the multi-dimensional attention mechanism for the task of dialogue response generation. In , multi-dimensional attention is used to extend graph attention networks for dialogue state tracking. Lastly, for the task of next-item recommendation, a model that incorporates multi-dimensional attention is proposed in .

3.3 Query-Related Attention Mechanisms

Queries are an important part of any attention model, since they directly determine which information is extracted from the feature vectors. These queries are based on the desired output of the task model, and can be interpreted as literal questions. Some queries have specific characteristics that require specific types of mechanisms to process them. As such, this category encapsulates the attention mechanisms that deal with specific types of query characteristics. The mechanisms in this category deal with one of the two following query characteristics: the type of queries or the multiplicity of queries.

3.3.1 Type of Queries

Different attention models employ attention for different purposes, meaning that distinct query types are necessary. There are basic queries, which are queries that are typically straightforward to define based on the data and model. For example, the hidden state for one prediction in an RNN is often used as the query for the next prediction. One could also use a vector of auxiliary variables as query. For example, when doing medical image classification, general patient characteristics can be incorporated into a query. Some attention mechanisms, such as co-attention, rotatory attention, and attention-over-attention, use specialized queries. For example, rotatory attention uses the context vector from another attention module as query, while interactive co-attention uses an averaged keys vector based on another input.

Another case one can consider is when attention is calculated based purely on the feature vectors. This concept has been mentioned before and is referred to as self-attention or intra-attention. We say that such models use self-attentive queries. There are two ways of interpreting such queries. Firstly, one can say that the query is constant. For example, document classification requires only a single classification as the output of the model. As such, the query is always the same, namely: "What is the class of the document?". The query can then be ignored and attention can be calculated based only on the features themselves. Score functions can be adjusted for this by making the query vector a vector of constants or by removing it entirely:

$\operatorname{score}(\mathbf{k}_l) = \mathbf{w}^T \operatorname{act}\left(\mathbf{W} \mathbf{k}_l + \mathbf{b}\right)$, (33)

where $\mathbf{w} \in \mathbb{R}^{d_w}$, $\mathbf{W} \in \mathbb{R}^{d_w \times d_k}$, and $\mathbf{b} \in \mathbb{R}^{d_w}$ are trainable parameters. Additionally, one can also interpret self-attention as learning the query along the way, meaning that the query can be defined as a trainable vector of weights. For example, the dot-product score function may take the following form:

$\operatorname{score}(\mathbf{k}_l) = \mathbf{q}^T \mathbf{k}_l$, (34)

where $\mathbf{q} \in \mathbb{R}^{d_k}$ is a trainable vector of weights. One could also interpret the vector $\mathbf{b} \in \mathbb{R}^{d_w}$ as the query in (33).
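The following short NumPy sketch illustrates the learned-query interpretation of (34): a trainable vector scores each key, and soft alignment turns the scores into a single pooled context vector (for instance, a sentence embedding). The random arrays stand in for trained parameters and extracted features, and the function name is illustrative.

```python
import numpy as np

def softmax(e):
    # Numerically stable softmax over a vector of scores.
    e = e - e.max()
    w = np.exp(e)
    return w / w.sum()

def self_attentive_pooling(K, V, q_learned):
    # Self-attentive query as a trainable weight vector (cf. Eq. 34):
    # score(k_l) = q^T k_l, followed by soft alignment and a weighted average.
    e = K @ q_learned          # (n_f,) attention scores
    a = softmax(e)             # attention weights
    return a @ V               # pooled context vector

rng = np.random.default_rng(0)
n_f, d_k, d_v = 7, 5, 5
K = rng.normal(size=(n_f, d_k))
V = K.copy()                       # keys and values both taken from the features
q = rng.normal(size=d_k)           # stands in for a learned query vector
print(self_attentive_pooling(K, V, q).shape)   # (5,)
```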
Another use of self-attention is to uncover the relations between the feature vectors $\mathbf{f}_1, \ldots, \mathbf{f}_{n_f}$. These relations can then be used as additional information to incorporate into new representations of the feature vectors. With basic attention mechanisms, the keys matrix $\mathbf{K}$ and the values matrix $\mathbf{V}$ are extracted from the features matrix $\mathbf{F}$, while the query $\mathbf{q}$ is produced separately. For this type of self-attention, the query vectors are extracted in a similar process as the keys and values, via a transformation matrix of trainable weights $\mathbf{W}_Q \in \mathbb{R}^{d_q \times d_f}$. We define the matrix $\mathbf{Q} = [\mathbf{q}_1, \ldots, \mathbf{q}_{n_f}] \in \mathbb{R}^{d_q \times n_f}$, which can be obtained as follows:

$\mathbf{Q} = \mathbf{W}_Q \mathbf{F}$. (35)

Each column of $\mathbf{Q}$ can be used as the query for the attention model. When attention is calculated using a query $\mathbf{q}$, the resulting context vector $\mathbf{c}$ summarizes the information in the feature vectors that is important to the query. Since the query, or a column of $\mathbf{Q}$, is now also a feature vector representation, the context vector contains the information of all feature vectors that are important to that specific feature vector. In other words, the context vectors capture the relations between the feature vectors. For example, self-attention allows one to extract the relations between words: which verbs refer to which nouns, which pronouns refer to which nouns, etc. For images, self-attention can be used to determine which image regions relate to each other.

While self-attention is placed in the query-related category, it is also very much related to the feature model. Namely, self-attention is a technique that is often used in the feature model to create improved representations of the feature vectors. For example, the Transformer model for language processing and the Transformer model for image processing both use multiple rounds of (multi-head) self-attention to improve the representation of the feature vectors. The relations captured by the self-attention mechanism are incorporated into new representations. A simple method of determining such a new representation is to set the feature vectors equal to the acquired self-attention context vectors, as presented in (36):

$\mathbf{f}^{(\text{new})} = \mathbf{c}$, (36)

where $\mathbf{f}^{(\text{new})}$ is the updated feature vector. Another possibility is to add the context vectors to the previous feature vectors with an additional normalization layer:

$\mathbf{f}^{(\text{new})} = \operatorname{Normalize}\left(\mathbf{f}^{(\text{old})} + \mathbf{c}\right)$, (37)

where $\mathbf{f}^{(\text{old})}$ is the previous feature vector, and $\operatorname{Normalize}()$ is a normalization layer . Using such techniques, self-attention has been used to create improved word or sentence embeddings that enhance model accuracy .
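The sketch below ties (35)-(37) together: every feature vector acts as a query, a multiplicative score produces one softmax per query, and the resulting context vectors are added back to the features with a simple layer normalization. For readability the features are stored as rows rather than the columns used in the text, the weights are random placeholders, and the plain dot-product score stands in for whichever score function a particular model would use.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Simple layer normalization over the feature dimension.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def self_attention_update(F, W_Q, W_K, W_V):
    """Self-attention over feature vectors with a residual update (cf. Eqs. 35-37).

    F: (n_f, d_f) feature vectors as rows. W_Q, W_K: (d_f, d_k); W_V: (d_f, d_f)
    so that the context vectors can be added back to the features.
    """
    Q, K, V = F @ W_Q, F @ W_K, F @ W_V          # queries/keys/values (Eq. 35)
    E = Q @ K.T                                   # pairwise multiplicative scores
    E = E - E.max(axis=1, keepdims=True)
    A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)   # one softmax per query
    C = A @ V                                     # one context vector per feature
    return layer_norm(F + C)                      # residual + normalization (Eq. 37)

rng = np.random.default_rng(0)
F = rng.normal(size=(6, 8))
W_Q, W_K = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
W_V = rng.normal(size=(8, 8))
print(self_attention_update(F, W_Q, W_K, W_V).shape)   # (6, 8)
```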
Self-attention is arguably one of the more important types of attention, partly due to its vital role in the highly popular Transformer model. Self-attention is a very general mechanism and can be applied to practically any problem. As such, self-attention has been extensively explored in many different fields, in both Transformer-based architectures and other types of models. For example, in , self-attention is explored for image recognition tasks, and the results indicate that the technique may have substantial advantages with regards to robustness and generalization. In , self-attention is used in a generative adversarial network (GAN) to determine which regions of the input image to focus on when generating the regions of a new image. In , self-attention is used to design a state-of-the-art medical image segmentation model. Naturally, self-attention can also be used for video processing. In , a self-attention model is proposed for the purpose of video summarization that reaches state-of-the-art results. In other fields, like audio processing, self-attention has been explored as well. In , self-attention is used to create a speech recognition model. Self-attention has also been explored in overlapping domains. For example, in , the self-attention Transformer architecture is used to create a model that can recognize phrases from audio and by lip-reading from a video. For the problem of next item recommendation, a Transformer model that explicitly captures item-item relations using self-attention is proposed in . Self-attention also has applications in many natural language processing fields. For example, in , self-attention is used for sentiment analysis. Self-attention is also highly popular for graph models. For example, self-attention is explored in for the purpose of representation learning in communication networks and rating networks. Additionally, the first attention model for graph networks was based on self-attention .

3.3.2 Multiplicity of Queries

In previous examples, the attention model generally used a single query for a prediction. We say that such models use singular query attention. However, there are attention architectures that allow the model to compute attention using multiple queries. Note that this is different from, for example, an RNN that may involve multiple queries to produce a sequence of predictions. Namely, such a model still requires only a single query per prediction. One example of a technique that incorporates multiple queries is multi-head attention , as presented in Fig. 9.

Fig. 9. An illustration of multi-head attention.

Multi-head attention works by implementing multiple attention modules in parallel by utilizing multiple different versions of the same query. The idea is to linearly transform the query $\mathbf{q}$ using different weight matrices. Each newly formed query essentially asks for a different type of relevant information, allowing the attention model to introduce more information into the context vector calculation. An attention model implements $d \geq 1$ heads, with each attention head having its own query vector, keys matrix, and values matrix: $\mathbf{q}^{(j)}$, $\mathbf{K}^{(j)}$, and $\mathbf{V}^{(j)}$, for $j = 1, \ldots, d$. The query $\mathbf{q}^{(j)}$ is obtained by linearly transforming the original query $\mathbf{q}$, while the matrices $\mathbf{K}^{(j)}$ and $\mathbf{V}^{(j)}$ are obtained through linear transformations of $\mathbf{F}$. As such, each attention head has its own learnable weight matrices $\mathbf{W}_q^{(j)}$, $\mathbf{W}_K^{(j)}$, and $\mathbf{W}_V^{(j)}$ for these transformations. The calculation of the query, keys, and values for the $j$th head is defined as follows:

$\mathbf{q}^{(j)} = \mathbf{W}_q^{(j)} \mathbf{q}, \quad \mathbf{K}^{(j)} = \mathbf{W}_K^{(j)} \mathbf{F}, \quad \mathbf{V}^{(j)} = \mathbf{W}_V^{(j)} \mathbf{F}$, (38)

where $\mathbf{W}_q^{(j)} \in \mathbb{R}^{d_q \times d_q}$, $\mathbf{W}_K^{(j)} \in \mathbb{R}^{d_k \times d_f}$, and $\mathbf{W}_V^{(j)} \in \mathbb{R}^{d_v \times d_f}$. Thus, each head creates its own representations of the query $\mathbf{q}$ and the input matrix $\mathbf{F}$. Each head can therefore learn to focus on different parts of the inputs, allowing the model to attend to more information. For example, when training a machine translation model, one attention head can learn which nouns (e.g., student, car, apple) certain verbs (e.g., walking, driving, buying) refer to, while another attention head learns which nouns certain pronouns (e.g., he, she, it) refer to . Each head also creates its own vector of attention scores $\mathbf{e}^{(j)} = [e_1^{(j)}, \ldots, e_{n_f}^{(j)}] \in \mathbb{R}^{n_f}$, and a corresponding vector of attention weights $\mathbf{a}^{(j)} = [a_1^{(j)}, \ldots, a_{n_f}^{(j)}] \in \mathbb{R}^{n_f}$. As can be expected, each attention head produces its own context vector $\mathbf{c}^{(j)} \in \mathbb{R}^{d_v}$, as follows:

$\mathbf{c}^{(j)} = \sum_{l=1}^{n_f} a_l^{(j)} \mathbf{v}_l^{(j)}$. (39)

The goal is still to create a single context vector as output of the attention model. As such, the context vectors produced by the individual attention heads are concatenated into a single vector. Afterwards, a linear transformation is applied using the weight matrix $\mathbf{W}_O \in \mathbb{R}^{d_c \times (d_v \cdot d)}$ to make sure the resulting context vector $\mathbf{c} \in \mathbb{R}^{d_c}$ has the desired dimension. This calculation is presented in (40). The dimension $d_c$ can be pre-specified by, for example, setting it equal to $d_v$, so that the context vector dimension is unchanged.

$\mathbf{c} = \mathbf{W}_O \operatorname{concat}\left(\mathbf{c}^{(1)}, \ldots, \mathbf{c}^{(d)}\right)$. (40)
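A compact NumPy sketch of (38)-(40) for a single query is given below. Each head applies its own random placeholder projections, computes a multiplicative score against its keys, and produces a head-specific context vector; the concatenated contexts are then projected back to dimension $d_v$. The dimensions, function names, and choice of score function are assumptions made for this illustration only.

```python
import numpy as np

def softmax(e):
    # Numerically stable softmax over a vector of scores.
    e = e - e.max()
    w = np.exp(e)
    return w / w.sum()

def multi_head_attention(q, F, heads, W_O):
    """Minimal multi-head attention sketch in the spirit of Eqs. (38)-(40).

    q: (d_q,) original query; F: (n_f, d_f) feature vectors as rows.
    heads: list of (W_q, W_K, W_V) projection matrices, one tuple per head.
    W_O: (d_c, d * d_v) projection applied to the concatenated head contexts.
    """
    contexts = []
    for W_q, W_K, W_V in heads:
        q_j = W_q @ q                # head-specific query            (Eq. 38)
        K_j = F @ W_K.T              # head-specific keys,   (n_f, d_k)
        V_j = F @ W_V.T              # head-specific values, (n_f, d_v)
        a = softmax(K_j @ q_j)       # multiplicative scores + soft alignment
        contexts.append(a @ V_j)     # head context vector c^(j)      (Eq. 39)
    return W_O @ np.concatenate(contexts)   # combined context        (Eq. 40)

rng = np.random.default_rng(0)
n_f, d_f, d_q, d_k, d_v, d = 5, 8, 4, 4, 4, 3   # d_q = d_k for the dot product
heads = [(rng.normal(size=(d_q, d_q)),
          rng.normal(size=(d_k, d_f)),
          rng.normal(size=(d_v, d_f))) for _ in range(d)]
W_O = rng.normal(size=(d_v, d * d_v))           # keep the output at dimension d_v
q, F = rng.normal(size=d_q), rng.normal(size=(n_f, d_f))
print(multi_head_attention(q, F, heads, W_O).shape)   # (4,)
```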
Multi-head attention processes multiple attention modules in parallel, but attention modules can also be implemented sequentially to iteratively adjust the context vectors. Each of these attention modules is referred to as a repetition or round of attention. Such attention architectures are referred to as multi-hop attention models, also known as multi-step attention models. An important note to consider is the fact that multi-hop attention is a mechanism that has been proposed in various forms throughout various works. While the mechanism always involves multiple rounds of attention, the multi-hop implementation proposed in differs from the mechanism proposed in or . Another interesting example is , where a multi-hop attention model is proposed that would actually be considered alternating co-attention in this survey, as explained in Subsection 3.1.1. We present a general form of multi-hop attention that is largely a generalization of the techniques introduced in and . Fig. 10 provides an example implementation of a multi-hop attention mechanism.

Fig. 10. An example illustration of multi-hop attention. Solid arrows represent the base multi-hop model structure, while dotted arrows represent optional connections.

The general idea is to iteratively transform the query, and to use the query to transform the context vector, such that the model can extract different information in each step. Remember that a query is similar to a literal question. As such, one can interpret the transformed queries as asking the same question in a different manner or from a different perspective, similarly to the queries in multi-head attention. The query that was previously denoted by $\mathbf{q}$ is now referred to as the initial query, and is denoted by $\mathbf{q}^{(0)}$. At hop $s$, the current query $\mathbf{q}^{(s)}$ is transformed into a new query representation $\mathbf{q}^{(s+1)}$, possibly using the current context vector $\mathbf{c}^{(s)}$ as another input, and some transformation function $\operatorname{transform}()$:

$\mathbf{q}^{(s+1)} = \operatorname{transform}\left(\mathbf{q}^{(s)}, \mathbf{c}^{(s)}\right)$. (41)

For the specific form of the transformation function $\operatorname{transform}()$, a mechanism similar to self-attention is proposed in . Essentially, the queries used by the question answer matching model proposed in were originally based on a set of feature vectors extracted from a question. This model also defines the original query $\mathbf{q}^{(0)}$ as the unweighted average of these feature vectors. At each hop $s$, attention can be calculated on these feature vectors using the previous query $\mathbf{q}^{(s)}$ as the query in this process. The resulting context vector of this calculation is the next query vector. Using the context vector $\mathbf{c}^{(s)}$ instead of $\mathbf{q}^{(s)}$ as the query for this process is also a possibility, which is similar to the LCR-Rot-hop model proposed in and the multi-step model proposed in . Such a connection is represented by the dotted arrows in Fig. 10.
The transformation mechanism uses either $\mathbf{q}^{(s)}$ or the context vector $\mathbf{c}^{(s)}$ as query, but a combination via concatenation is also possible. Each query representation is used as input for the attention module to compute attention on the columns of the feature matrix $\mathbf{F}$, as seen previously. One main difference, however, is that the context vector $\mathbf{c}^{(s)}$ is also used as input, so that the actual query input for the attention model is the concatenation of $\mathbf{c}^{(s)}$ and $\mathbf{q}^{(s+1)}$. The adjusted attention score function is presented in (42). Note that the initial context vector $\mathbf{c}^{(0)}$ is predefined. One way of doing this is by setting it equal to the unweighted average of the value vectors $\mathbf{v}_1, \ldots, \mathbf{v}_{n_f} \in \mathbb{R}^{d_v}$ extracted from $\mathbf{F}$.

$e_l^{(s)} = \operatorname{score}\left(\operatorname{concat}\left(\mathbf{q}^{(s+1)}, \mathbf{c}^{(s)}\right), \mathbf{k}_l\right)$. (42)

An alignment function and the value vectors are then used to produce the next context vector $\mathbf{c}^{(s+1)}$. One must note that in , the same weights are used in each iteration, meaning that the number of parameters does not scale with the number of repetitions. Yet, using multiple hops with different weight matrices can also be viable, as shown by the Transformer model and in . It may be difficult to grasp why $\mathbf{c}^{(s)}$ is part of the query input for the attention model. Essentially, this technique is closely related to self-attention in the sense that, in each iteration, a new context representation is created from the feature vectors and the context vector. The essence of this mechanism is that one wants to iteratively alter the query and the context vector while attending to the feature vectors. In the process, the new representations of the context vector absorb more and more diverse information. This is also the main difference between this type of attention and multi-head attention. Multi-head attention creates multiple context vectors from multiple queries and combines them to create a final context vector as output. Multi-hop attention iteratively refines the context vector by incorporating information from the different queries. This does have the disadvantage of having to calculate attention sequentially.

Interestingly, due to the variety of forms in which multi-hop attention has been proposed, some consider the Transformer model's encoder and decoder to consist of several single-hop attention mechanisms instead of being a multi-hop model. However, in the context of this survey, we consider the Transformer model to be an alternative form of the multi-hop mechanism, as the features matrix $\mathbf{F}$ is not directly reused in each step. Instead, $\mathbf{F}$ is only used as an input for the first hop, and is transformed via self-attention into a new representation. The self-attention mechanism uses each feature vector in $\mathbf{F}$ as a query, resulting in a matrix of context vectors as output of each attention hop. The intermediate context vectors are turned into matrices and represent iterative transformations of the matrix $\mathbf{F}$, which are used in the consecutive steps. Thus, the Transformer model iteratively refines the features matrix $\mathbf{F}$ by extracting and incorporating new information.
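The following NumPy sketch is one possible instantiation of (41) and (42), under several assumptions that are ours rather than taken from any specific referenced model: the transform() step is a single tanh layer over the concatenated query and context, the score function is additive, and the same placeholder weights are shared across all hops so that the parameter count does not grow with the number of hops.

```python
import numpy as np

def softmax(e):
    e = e - e.max()
    w = np.exp(e)
    return w / w.sum()

def multi_hop_attention(q0, K, V, params, n_hops=3):
    """One possible multi-hop attention sketch in the spirit of Eqs. (41)-(42).

    q0: (d_q,) initial query; K: (n_f, d_k) keys; V: (n_f, d_v) values.
    params holds placeholder weights shared across hops.
    """
    q = q0
    c = V.mean(axis=0)                          # predefined initial context c^(0)
    for _ in range(n_hops):
        # transform(): a single tanh layer over the query and context (Eq. 41).
        q = np.tanh(params["W_t"] @ np.concatenate([q, c]))
        # Additive score of each key against concat(q^(s+1), c^(s)) (Eq. 42).
        query_input = np.concatenate([q, c])
        e = np.array([params["w"] @ np.tanh(params["W1"] @ query_input
                                            + params["W2"] @ k)
                      for k in K])
        a = softmax(e)
        c = a @ V                               # next context vector c^(s+1)
    return c

rng = np.random.default_rng(0)
n_f, d_q, d_k, d_v, d_w = 5, 3, 4, 4, 6
params = {"W_t": rng.normal(size=(d_q, d_q + d_v)),
          "W1": rng.normal(size=(d_w, d_q + d_v)),
          "W2": rng.normal(size=(d_w, d_k)),
          "w": rng.normal(size=d_w)}
q0 = rng.normal(size=d_q)
K, V = rng.normal(size=(n_f, d_k)), rng.normal(size=(n_f, d_v))
print(multi_hop_attention(q0, K, V, params).shape)   # (4,)
```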
When dealing with a classification task, another idea is to use a different query for each class. This is the basic principle behind capsule-based attention , as inspired by the capsule networks . Suppose we have the feature vectors $\mathbf{f}_1, \ldots, \mathbf{f}_{n_f} \in \mathbb{R}^{d_f}$, and suppose there are $d_y$ classes that the model can predict. Then, a capsule-based attention model defines a capsule for each of the $d_y$ classes, where each capsule takes the feature vectors as input. Each capsule consists of, in order, an attention module, a probability module, and a reconstruction module, which are depicted in Fig. 11.

Fig. 11. An illustration of capsule-based attention.

The attention modules all use self-attentive queries, so each module learns its own query: "Which feature vectors are important to identify this class?". In , a self-attentive multiplicative score function is used for this purpose:

$e_{c,l} = \mathbf{q}_c^T \mathbf{k}_l$, (43)

where $e_{c,l} \in \mathbb{R}^1$ is the attention score for vector $l$ in capsule $c$, and $\mathbf{q}_c \in \mathbb{R}^{d_k}$ is a trainable query for capsule $c$, for $c = 1, \ldots, d_y$. Each attention module then uses an alignment function, and uses the produced attention weights to determine a context vector $\mathbf{c}_c \in \mathbb{R}^{d_v}$. Next, the context vector $\mathbf{c}_c$ is fed through a probability layer consisting of a linear transformation with a sigmoid activation function:

$p_c = \operatorname{sigmoid}\left(\mathbf{w}_c^T \mathbf{c}_c + b_c\right)$, (44)

where $\mathbf{w}_c \in \mathbb{R}^{d_v}$ and $b_c \in \mathbb{R}^1$ are trainable capsule-specific weight parameters, and $p_c \in \mathbb{R}^1$ is the predicted probability that the correct class is class $c$. The final layer is the reconstruction module that creates a class vector representation. This representation $\mathbf{r}_c \in \mathbb{R}^{d_v}$ is determined by simply multiplying the context vector $\mathbf{c}_c$ by the probability $p_c$:

$\mathbf{r}_c = p_c \times \mathbf{c}_c$. (45)

The capsule representation is used when training the model. First of all, the model is trained to predict the probabilities $p_1, \ldots, p_{d_y}$ as accurately as possible compared to the true values. Secondly, via a joint loss function, the model is also trained to accurately construct the capsule representations $\mathbf{r}_1, \ldots, \mathbf{r}_{d_y}$. A features representation $\mathbf{f} \in \mathbb{R}^{d_f}$ is defined, which is simply the unweighted average of the original feature vectors. The idea is to train the model such that the vector representations from capsules that do not correspond to the correct class differ significantly from $\mathbf{f}$, while the representation from the correct capsule is very similar to $\mathbf{f}$. A dot-product between the capsule representations and the features representation is used in as a measure of the distance between the vectors. Note that $d_v$ must equal $d_f$ in this case, otherwise the vectors would have incompatible dimensions. Interestingly, since attention is calculated for each class individually, one can track which specific feature vectors are important for which specific class. In , this idea is used to discover which words correspond to which sentiment class.
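A minimal NumPy sketch of (43)-(45) is shown below, with one capsule per class. The queries, probability-layer weights, and features are random placeholders standing in for trained parameters, and the joint loss used for training is omitted; only the forward pass of the attention, probability, and reconstruction modules is illustrated.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(e):
    e = e - e.max()
    w = np.exp(e)
    return w / w.sum()

def capsule_attention(K, V, queries, w_probs, b_probs):
    """Minimal capsule-based attention sketch (cf. Eqs. 43-45).

    K: (n_f, d_k) keys; V: (n_f, d_v) values.
    queries: (d_y, d_k), one trainable self-attentive query per class.
    w_probs: (d_y, d_v) and b_probs: (d_y,) parameterize the probability modules.
    Returns the per-class probabilities p_c and capsule representations r_c.
    """
    probs, reps = [], []
    for q_c, w_c, b_c in zip(queries, w_probs, b_probs):
        e_c = K @ q_c                     # class-specific scores       (Eq. 43)
        a_c = softmax(e_c)
        c_c = a_c @ V                     # class-specific context vector
        p_c = sigmoid(w_c @ c_c + b_c)    # probability module          (Eq. 44)
        probs.append(p_c)
        reps.append(p_c * c_c)            # reconstruction module       (Eq. 45)
    return np.array(probs), np.stack(reps)

rng = np.random.default_rng(0)
n_f, d_k, d_v, d_y = 6, 4, 4, 3
p, r = capsule_attention(rng.normal(size=(n_f, d_k)),
                         rng.normal(size=(n_f, d_v)),
                         rng.normal(size=(d_y, d_k)),
                         rng.normal(size=(d_y, d_v)),
                         rng.normal(size=d_y))
print(p.shape, r.shape)   # (3,) (3, 4)
```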
The number of tasks that can make use of multiple queries is substantial, due to how general the mechanisms are. As such, the techniques described in this section have been extensively explored in various domains. For example, multi-head attention has been used for speaker recognition based on audio spectrograms . In , multi-head attention is used for the recommendation of news articles. Additionally, multi-head attention can be beneficial for graph attention models as well . As for multi-hop attention, quite a few papers have been mentioned before, but there are still many other interesting examples. For example, in , a multi-hop attention model is proposed for medication recommendation. Furthermore, practically every Transformer model makes use of both multi-head and multi-hop attention. The Transformer model has been extensively explored in various domains. For example, in , a Transformer model is implemented for image captioning. In , Transformers are explored for medical image segmentation. In , a Transformer model is used for emotion recognition in text messages. A last example of an application of Transformers is , which proposes a Transformer model for recommender systems. In comparison with multi-head and multi-hop attention, capsule-based attention is arguably the least popular of the mechanisms discussed for the multiplicity of queries. One example is , where an attention-based capsule network is proposed that also includes a multi-hop attention mechanism for the purpose of visual question answering. Another example is , where capsule-based attention is used for aspect-level sentiment analysis of restaurant reviews.

The multiplicity of queries is a particularly interesting category due to the Transformer model , which combines a form of multi-hop and multi-head attention. Due to the initial success of the Transformer model, many improvements and iterations of the model have been produced that typically aim to improve the predictive performance, the computational efficiency, or both. For example, the Transformer-XL is an extension of the original Transformer that uses a recurrence mechanism so that it is not limited by a context window when processing the outputs. This allows the model to learn significantly longer dependencies while also being computationally more efficient during the evaluation phase. Another extension of the Transformer is known as the Reformer model . This model is significantly more efficient computationally, by means of locality-sensitive hashing, and memory-wise, by means of reversible residual layers. Such computational improvements are vital, since one of the main disadvantages of the Transformer model is the sheer computational cost due to the complexity of the model scaling quadratically with the number of input feature vectors. The Linformer model manages to reduce the complexity of the model to scale linearly, while achieving similar performance to the Transformer model. This is achieved by approximating the attention weights using a low-rank matrix. The Lite-Transformer model proposed in achieves similar results by implementing two branches within the Transformer block that specialize in capturing global and local information. Another interesting Transformer architecture is the Synthesizer . This model replaces the pairwise self-attention mechanism with synthetic attention weights. Interestingly, the performance of this model is relatively close to the original Transformer, meaning that the necessity of the pairwise self-attention mechanism of the Transformer model may be questionable. For a more comprehensive overview of Transformer architectures, we refer to .

4 EVALUATION OF ATTENTION MODELS

In this section, we present various types of evaluation for attention models. Firstly, one can evaluate the structure of attention models using the taxonomy presented in Section 3. For such an analysis, we consider the attention mechanism categories (see Fig. 3) as orthogonal dimensions of a model. The structure of a model can be analyzed by determining which mechanism the model uses for each category. Table 3 provides an overview of attention models found in the literature with a corresponding analysis based on the attention mechanisms the models implement. Secondly, we discuss various techniques for evaluating the performance of attention models.
TABLE 3
Attention models analyzed based on the proposed taxonomy. A plus sign (+) between two mechanisms indicates that both techniques were combined in the same model, while a comma (,) indicates that both mechanisms were tested in the same paper, but not necessarily as a combination in the same model. The columns cover the feature-related mechanisms (Multiplicity, Levels, Representations), the general mechanisms (Scoring, Alignment, Dimensionality), and the query-related mechanisms (Type, Multiplicity).

Model | Multiplicity | Levels | Representations | Scoring | Alignment | Dimensionality | Type | Multiplicity
Bahdanau et al. | Singular | Single-Level | Single-Representational | Additive | Global | Single-Dimensional | Basic | Singular
Luong et al. | Singular | Single-Level | Single-Representational | Multiplicative, Location | Global, Local | Single-Dimensional | Basic | Singular
Xu et al. | Singular | Single-Level | Single-Representational | Additive | Soft, Hard | Single-Dimensional | Basic | Singular
Lu et al. | Parallel Co-attention | Hierarchical | Single-Representational | Additive | Global | Single-Dimensional | Specialized | Singular
Yang et al. | Singular | Hierarchical | Single-Representational | Additive | Global | Single-Dimensional | Self-Attentive | Singular
Li et al. | Singular | Hierarchical | Single-Representational | Additive | Global | Single-Dimensional | Self-Attentive | Singular
Vaswani et al. | Singular | Single-Level | Single-Representational | Scaled Multiplicative | Global | Single-Dimensional | Self-Attentive + Basic | Multi-Head + Multi-Hop
Wallaart and Frasincar | Rotatory | Single-Level | Single-Representational | Activated General | Global | Single-Dimensional | Specialized | Multi-Hop
Kiela et al. | Singular | Single-Level | Multi-Representational | Additive | Global | Single-Dimensional | Self-Attentive | Singular
Shen et al. | Singular | Single-Level | Single-Representational | Additive | Global | Multi-Dimensional | Self-Attentive | Singular
Zhang et al. | Singular | Single-Level | Single-Representational | Multiplicative | Global | Single-Dimensional | Self-Attentive | Singular
Li et al. | Parallel Co-attention | Single-Level | Single-Representational | Scaled Multiplicative | Global | Single-Dimensional | Self-Attentive + Specialized | Singular
Yu et al. | Parallel Co-attention | Single-Level | Single-Representational | Multiplicative | Global | Single-Dimensional | Self-Attentive + Specialized | Multi-Head
Wang et al. | Parallel Co-attention | Single-Level | Single-Representational | Additive | Reinforced | Single-Dimensional | Specialized | Singular
Oktay et al. | Singular | Single-Level | Single-Representational | Additive | Global | Multi-Dimensional | Self-Attentive + Specialized | Singular
Winata et al. | Singular | Single-Level | Multi-Representational | Additive | Global | Single-Dimensional | Self-Attentive | Multi-Head
Wang et al. | Singular | Single-Level | Single-Representational | Multiplicative | Global | Single-Dimensional | Self-Attentive | Capsule-Based

The performance of attention models can be evaluated using extrinsic or intrinsic performance measures, which are discussed in Subsections 4.1 and 4.2, respectively.

4.1 Extrinsic Evaluation

In general, the performance of an attention model is measured using extrinsic performance measures. For example, performance measures typically used in the field of natural language processing are the BLEU , METEOR , and Perplexity metrics. In the field of audio processing, the Word Error Rate and Phoneme Error Rate are generally employed. For general classification tasks, error rates, precision, and recall are generally used. For computer vision tasks, the PSNR , SSIM , or IoU metrics are used. Using these performance measures, an attention model can either be compared to other state-of-the-art models, or an ablation study can be performed. If possible, the importance of the attention mechanism can be tested by replacing it with another mechanism and observing whether the overall performance of the model decreases , . An example of this is replacing the weighted average used to produce the context vector with a simple unweighted average and observing whether there is a decrease in overall model performance . This ablation method can be used to evaluate whether the attention weights can actually distinguish important from irrelevant information.

4.2 Intrinsic Evaluation

Attention models can also be evaluated using attention-specific intrinsic performance measures. In , the attention weights are formally evaluated via the Alignment Error Rate (AER) to measure the accuracy of the attention weights with respect to annotated attention vectors. This idea has been incorporated into an attention model by supervising the attention mechanism using gold attention vectors . A joint loss function consisting of the regular task-specific loss and the attention weights loss function is constructed for this purpose. The gold attention vectors are based on annotated text data sets where keywords are hand-labelled. However, since attention is inspired by human attention, one could also evaluate attention models by comparing them to the attention behaviour of humans.

4.2.1 Evaluation via Human Attention

In , the concept of attention correctness is proposed, which is a quantitative intrinsic performance metric that evaluates the quality of the attention mechanism based on actual human attention behaviour. The calculation of this metric requires data that includes the attention behaviour of a human, for example, a data set containing images with the corresponding regions that a human focuses on when performing a certain task, such as image captioning. The collection of regions focused on by the human is referred to as the ground truth region.
Suppose an attention model attends to the $n_f$ feature vectors $\mathbf{f}_1, \ldots, \mathbf{f}_{n_f} \in \mathbb{R}^{d_f}$. Feature vector $\mathbf{f}_i$ corresponds to region $R_i$ of the given image, for $i = 1, \ldots, n_f$. We define the set $G$ as the set of regions that belong to the ground truth region, such that $R_i \in G$ if $R_i$ is part of the ground truth region. The attention model calculates the attention weights $a_1, \ldots, a_{n_f} \in \mathbb{R}^1$ via the usual attention process. The Attention Correctness ($AC$) metric can then be calculated using (46):

$AC = \sum_{i: R_i \in G} a_i$. (46)

Thus, this metric is equal to the sum of the attention weights for the ground truth regions. Since the attention weights sum up to 1 due to, for example, a softmax alignment function, the $AC$ value will be a value between 0 and 1. If the model attends to only the ground truth regions, then $AC$ is equal to 1, and if the attention model does not attend to any of the ground truth regions, $AC$ will be equal to 0.
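A small sketch of (46) is given below; the attention weights and the ground truth mask are toy values invented for the example, and the function name is illustrative.

```python
import numpy as np

def attention_correctness(weights, ground_truth_mask):
    """Attention Correctness (Eq. 46): the sum of the attention weights that
    fall on the human ground truth regions. If the weights sum to 1, the
    result lies in [0, 1].

    weights: (n_f,) attention weights a_1,...,a_nf.
    ground_truth_mask: (n_f,) booleans, True where region R_i is in G.
    """
    weights = np.asarray(weights, dtype=float)
    mask = np.asarray(ground_truth_mask, dtype=bool)
    return float(weights[mask].sum())

# Toy example: the model places 0.7 of its attention mass on ground truth regions.
a = np.array([0.1, 0.4, 0.3, 0.2])
in_ground_truth = np.array([False, True, True, False])
print(attention_correctness(a, in_ground_truth))   # 0.7
```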
In , a rank correlation metric is used to compare the generated attention weights to the attention behaviour of humans. The conclusion of this work is that attention maps generated by standard attention models generally do not correspond to human attention. Attention models often focus on much larger regions or on multiple small non-adjacent regions. As such, a technique to improve attention models is to allow the model to learn from human attention patterns via a joint loss consisting of the regular loss function and an attention weight loss function based on human gaze behaviour, similarly to how annotated attention vectors are used in to supervise the attention mechanism. The use of human attention data to supervise the attention mechanism in such a manner is proposed in . Similarly, a state-of-the-art video captioning model is proposed in that learns from human gaze data to improve the attention mechanism.

4.2.2 Manual Evaluation

A method that is often used to evaluate attention models is the manual inspection of attention weights. As previously mentioned, the attention weights are a direct indication of which parts of the data the attention model finds most important. Therefore, observing which parts of the inputs the model focuses on can be helpful in determining whether the model is behaving correctly. This allows for some interpretation of the behaviour of models that are typically known to be black boxes. However, rather than checking if the model focuses on the most important parts of the data, some use the attention weights to determine which parts of the data are most important. This would imply that attention models provide a type of explanation, which is a subject of contention among researchers. Particularly, in , extensive experiments are conducted for various natural language processing tasks to investigate the relation between attention weights and important information, to determine whether attention can actually provide meaningful explanations. In this paper, titled "Attention is not Explanation", it is found that attention weights do not tend to correlate with important features. Additionally, the authors are able to replace the produced attention weights with completely different values while keeping the model output the same. These so-called adversarial attention distributions show that an attention model may focus on completely different information and still come to the same conclusions, which makes interpretation difficult. Yet, in another paper, titled "Attention is not not Explanation" , the claim that attention is not explanation is questioned by challenging the assumptions of the previous work. It is found that the adversarial attention distributions do not perform as reliably well as the learned attention weights, indicating that it was not proved that attention is not viable for explanation. In general, the conclusion regarding the interpretability of attention models is that researchers must be extremely careful when drawing conclusions based on attention patterns. For example, problems with an attention model can be diagnosed via the attention weights if the model is found to focus on the incorrect parts of the data, if such information is available. Yet, conversely, attention weights may only be used to obtain plausible explanations for why certain parts of the data are focused on, rather than to conclude that those parts are significant to the problem . However, one should still be cautious, as the viability of such approaches can depend on the model architecture .

5 CONCLUSION

In this survey, we have provided an overview of recent research on attention models in deep learning. Attention mechanisms have been a prominent development for deep learning models, as they have been shown to improve model performance significantly, producing state-of-the-art results for various tasks in several fields of research. We have presented a comprehensive taxonomy that can be used to categorize and explain the diverse number of attention mechanisms proposed in the literature. The organization of the taxonomy was motivated based on the structure of a task model that consists of a feature model, an attention model, a query model, and an output model.
Furthermore, the attention mechanisms have been discussed using a framework based on queries, keys, and values. Last, we have shown how one can use extrinsic and intrinsic measures to evaluate the performance of attention models, and how one can use the taxonomy to analyze the structure of attention models.

The attention mechanism is typically relatively simple to understand and implement and can lead to significant improvements in performance. As such, it is no surprise that this is a highly active field of research with new attention mechanisms and models being developed constantly. Not only are new mechanisms consistently being developed, but there is also still ample opportunity for the exploration of existing mechanisms for new tasks. For example, multi-dimensional attention is a technique that shows promising results and is general enough to be implemented in almost any attention model. However, it has not seen much application in current works. Similarly, multi-head attention is a technique that can be efficiently parallelized and implemented in practically any attention model. Yet, it is mostly seen only in Transformer-based architectures. Lastly, similarly to how rotatory attention is combined with multi-hop attention in , combining multi-dimensional attention, multi-head attention, capsule-based attention, or any of the other mechanisms presented in this survey may produce new state-of-the-art results for the various fields of research mentioned in this survey.

This survey has mainly focused on attention mechanisms for supervised models, since these comprise the largest proportion of the attention models in the literature. In comparison to the total amount of research that has been done on attention models, research on attention models for semi-supervised learning , or unsupervised learning , has received limited attention and has only become active recently. Attention may play a more significant role for such tasks in the future, as obtaining large amounts of labeled data is a difficult task. Yet, as larger and more detailed data sets become available, the research on attention models can advance even further. For example, we mentioned the fact that attention weights can be trained directly based on hand-annotated data or actual human attention behaviour , . As new data sets are released, future research may focus on developing attention models that can incorporate those types of data. While attention is intuitively easy to understand, there is still a substantial lack of theoretical support for attention. As such, we expect more theoretical studies to additionally contribute to the understanding of the attention mechanisms in complex deep learning systems. Nevertheless, the practical advantages of attention models are clear. Since attention models provide significant performance improvements in a variety of fields, and as there are ample opportunities for more advancements, we foresee that these models will still receive significant attention in the time to come.

REFERENCES

H. Larochelle and G. E. Hinton, Learning to combine foveal glimpses with a third-order Boltzmann machine, in 24th Annual Conference on Neural Information Processing Systems (NIPS 2010). Curran Associates, Inc., 2010, pp. 1243–1251.
V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, Recurrent models of visual attention, in 27th Annual Conference on Neural Information Processing Systems (NIPS 2014). Curran Associates, Inc., 2014, pp. 2204–2212.
D. Bahdanau, K. Cho, and Y.
Bengio, Neural machine translation by jointly learning to align and translate, in 3rd International Conference on Learning Representation (ICLR 2015) , 2015. T. Luong, H. Pham, and C. D. Manning, Effective approaches to attention-based neural machine translation, in 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) . ACL, 2015, pp. 14121421. Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy, Hierarchical attention networks for document classification, in 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT 2016) . ACL, 2016, pp. 14801489. Y. Wang, M. Huang, X. Zhu, and L. Zhao, Attention-based LSTM for aspect-level sentiment classification, in 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016) . ACL, 2016, pp. 606615. P . Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, Bottom-up and top-down attention for image captioning and visual question answering, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018) , 2018, pp. 60776086. K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, Show, attend and tell: Neural image caption generation with visual attention, in 32nd International Conference on Machine Learning (ICML 2015) , vol. 37. PMLR, 2015, pp. 20482057. Y. Ma, H. Peng, and E. Cambria, Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 58765883. J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, Attention-based models for speech recognition, in 28th Annual Conference on Neural Information Processing Systems (NIPS 2015) . Curran Associates, Inc., 2015, pp. 577585. D. Bahdanau, J. Chorowski, D. Serdyuk, P . Brakel, and Y. Bengio, End-to-end attention-based large vocabulary speech recognition, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016) . IEEE Signal Processing Society, 2016, pp. 49454949. S. Kim, T. Hori, and S. Watanabe, Joint CTC-attention based end-to-end speech recognition using multi-task learning, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2017) . IEEE Signal Processing Society, 2017, pp. 48354839. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, Attention is all you need, in 31st Annual Conference on Neural Information Processing Systems (NIPS 2017) . Curran Associates, Inc., 2017, pp. 5998 6008. K. Cho, B. van Merri enboer, D. Bahdanau, and Y. Bengio, On the properties of neural machine translation: Encoderdecoder approaches, in 8th Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST 2014) . ACL, 2014, pp. 103111. 18 N. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, A. Ku, and D. Tran, Image Transformer, in 35th International Conference on Machine Learning (ICML 2018) , vol. 80. PMLR, 2018, pp. 4055 4064. L. Zhou, Y. Zhou, J. J. Corso, R. Socher, and C. Xiong, End-toend dense video captioning with masked transformer, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018) . IEEE Computer Society, 2018, pp. 87398748. F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P . 
Jiang, BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer, in 28th ACM International Conference on Information and Knowledge Management (CIKM 2019) . ACM, 2019, p. 14411450. F. Wang and D. M. J. Tax, Survey on the attention based RNN model and its applications in computer vision, arXiv:1601.06823 , 2016. J. B. Lee, R. A. Rossi, S. Kim, N. K. Ahmed, and E. Koh, Attention models in graphs: A survey, ACM Transitions on Knowledge Discovery from Data , vol. 13, pp. 62:162:25, 2019. S. Chaudhari, V . Mithal, G. Polatkan, and R. Ramanath, An attentive survey of attention models, ACM Transactions on Intelligent Systems and Technology , vol. 12, no. 5, pp. 132, 2021. D. Hu, An introductory survey on attention mechanisms in NLP problems, in Proceedings of the 2019 Intelligent Systems Conference (IntelliSys 2019) , ser. AISC, vol. 1038. Springer, 2020, pp. 432448. A. Galassi, M. Lippi, and P . Torroni, Attention, please! a critical review of neural attention models in natural language processing, arXiv:1902.02181 , 2019. M. Daniluk, T. Rockt aschel, J. Welbl, and S. Riedel, Frustratingly short attention spans in neural language modeling, in 5th International Conference on Learning Representations (ICLR 2017) , 2017. Y. Xu, Q. Kong, Q. Huang, W. Wang, and M. D. Plumbley, Attention and localization based on a deep convolutional recurrent model for weakly supervised audio tagging, in Proceedings of the 18th Annual Conference of the International Speech Communication Association (Interspeech 2017) . ISCA, 2017, pp. 30833087. C. Yu, K. S. Barsim, Q. Kong, and B. Yang, Multi-level attention model for weakly supervised audio classification, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE 2018) , 2018, pp. 188192. S. Sharma, R. Kiros, and R. Salakhutdinov, Action recognition using visual attention, in Proceedings of the 4th International Conference on Learning Representations Workshop (ICLR 2016) , 2016. L. Gao, Z. Guo, H. Zhang, X. Xu, and H. T. Shen, Video captioning with attention-based LSTM and semantic consistency, IEEE Transactions on Multimedia , vol. 19, no. 9, pp. 20452055, 2017. H. Ying, F. Zhuang, F. Zhang, Y. Liu, G. Xu, X. Xie, H. Xiong, and J. Wu, Sequential recommender system based on hierarchical attention networks, in 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) . IJCAI, 2018, pp. 39263932. H. Song, D. Rajan, J. Thiagarajan, and A. Spanias, Attend and diagnose: Clinical time series analysis using attention models, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 40914098. D. T. Tran, A. Iosifidis, J. Kanniainen, and M. Gabbouj, Temporal attention-augmented bilinear network for financial time-series data analysis, IEEE Transactions on Neural Networks and Learning Systems , vol. 30, no. 5, pp. 14071418, 2019. P . Veli ckovi c, G. Cucurull, A. Casanova, A. Romero, P . Lio, and Y. Bengio, Graph attention networks, in 6th International Conference on Learning Representations (ICLR 2018) , 2018. J. Lu, J. Yang, D. Batra, and D. Parikh, Hierarchical questionimage co-attention for visual question answering, in 30th Annual Conference on Neural Information Processing Systems (NIPS 2016) . Curran Associates, Inc., 2016, pp. 289297. F. Fan, Y. Feng, and D. Zhao, Multi-grained attention network for aspect-level sentiment classification, in 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018) . ACL, 2018, pp. 
34333442. D. Ma, S. Li, X. Zhang, and H. Wang, Interactive attention networks for aspect-level sentiment classification, in 26th International Joint Conference on Artificial Intelligence (IJCAI 2017) . IJCAI, 2017, pp. 40684074. M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, Bidirectional attention flow for machine comprehension, in 4th International Conference on Learning Representations (ICLR 2016) , 2016. S. Zheng and R. Xia, Left-center-right separated neural network for aspect-based sentiment analysis with rotatory attention, arXiv:1802.00892 , 2018. B. Jing, P . Xie, and E. Xing, On the automatic generation of medical imaging reports, in 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018) . ACL, 2018, pp. 25772586. J. Gao, X. Wang, Y. Wang, Z. Yang, J. Gao, J. Wang, W. Tang, and X. Xie, CAMP: Co-attention memory networks for diagnosis prediction in healthcare, in 2019 IEEE International Conference on Data Mining (ICDM 2019) . IEEE, 2019, pp. 10361041. Y. Tay, A. T. Luu, and S. C. Hui, Multi-pointer co-attention networks for recommendation, in 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2018) . ACM, 2018, pp. 23092318. S. Liu, Z. Chen, H. Liu, and X. Hu, User-video co-attention network for personalized micro-video recommendation, in 2019 World Wide Web Conference (WWW 2019) . ACM, 2019, pp. 3020 3026. M. Tu, G. Wang, J. Huang, Y. Tang, X. He, and B. Zhou, Multihop reading comprehension across multiple documents by reasoning over heterogeneous graphs, in 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) . Association for Computational Linguistics, 2019, pp. 27042713. Y.-J. Lu and C.-T. Li, GCAN: Graph-aware co-attention networks for explainable fake news detection on social media, in 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020) . ACL, 2020, pp. 505514. O. Wallaart and F. Frasincar, A hybrid approach for aspectbased sentiment analysis using a lexicalized domain ontology and attentional neural models, in 16th Extended Semantic Web Conference (ESWC 2019) , ser. LNCS, vol. 11503. Springer, 2019, pp. 363378. S. Zhao and Z. Zhang, Attention-via-attention neural machine translation, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 563570. L. Wu, L. Chen, R. Hong, Y. Fu, X. Xie, and M. Wang, A hierarchical attention model for social contextual image recommendation, IEEE Transactions on Knowledge and Data Engineering , 2019. Y. Wang, S. Wang, J. Tang, N. OHare, Y. Chang, and B. Li, Hierarchical attention network for action recognition in videos, arXiv:1607.06416 , 2016. Z. Li, Y. Wei, Y. Zhang, and Q. Yang, Hierarchical attention transfer network for cross-domain sentiment classification, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 58525859. C. Xing, Y. Wu, W. Wu, Y. Huang, and M. Zhou, Hierarchical recurrent attention network for response generation, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 56105617. V . A. Sindagi and V . M. Patel, HA-CCN: Hierarchical attentionbased crowd counting network, IEEE Transactions on Image Processing , vol. 29, pp. 323335, 2019. D. Kiela, C. Wang, and K. Cho, Dynamic meta-embeddings for improved sentence representations, in 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018) . ACL, 2018, pp. 14661477. S. Maharjan, M. Montes, F. A. Gonz alez, and T. 
Solorio, A genre-aware attention model to improve the likability prediction of books, in 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018) . ACL, 2018, pp. 33813391. G. I. Winata, Z. Lin, and P . Fung, Learning multilingual metaembeddings for code-switching named entity recognition, in 4th Workshop on Representation Learning for NLP (RepL4NLP 2019) . ACL, 2019, pp. 181186. R. Jin, L. Lu, J. Lee, and A. Usman, Multi-representational convolutional neural networks for text classification, Computational Intelligence , vol. 35, no. 3, pp. 599609, 2019. A. Sordoni, P . Bachman, A. Trischler, and Y. Bengio, Iterative alternating neural attention for machine reading, arXiv:1606.02245 , 2016. A. Graves, G. Wayne, and I. Danihelka, Neural Turing machines, arXiv:1410.5401 , 2014. D. Britz, A. Goldie, M.-T. Luong, and Q. Le, Massive exploration of neural machine translation architectures, in 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) . ACL, 2017, pp. 14421451. 19 R. J. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning, Machine Learning , vol. 8, no. 3, pp. 229256, 1992. T. Shen, T. Zhou, G. Long, J. Jiang, S. Wang, and C. Zhang, Reinforced self-attention network: a hybrid of hard and soft attention for sequence modeling, in 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) . IJCAI, 2018, pp. 43454352. M. Malinowski, C. Doersch, A. Santoro, and P . Battaglia, Learning visual question answering by bootstrapping hard attention, in2018 European Conference on Computer Vision (ECCV 2018) , 2018. Y. Liu, W. Wang, Y. Hu, J. Hao, X. Chen, and Y. Gao, Multiagent game abstraction via graph attention neural network, in 34th AAAI Conference on Artificial Intelligence (AAAI 2020) , vol. 34, no. 05. AAAI Press, 2020, pp. 72117218. S. Seo, J. Huang, H. Yang, and Y. Liu, Interpretable convolutional neural networks with dual local and global attention for review rating prediction, in 11th ACM Conference on Recommender Systems (RecSys 2017) . ACM, 2017, pp. 297305. J. Wang, C. Sun, S. Li, X. Liu, L. Si, M. Zhang, and G. Zhou, Aspect sentiment classification towards question-answering with reinforced bidirectional attention network, in 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) . ACL, 2019, pp. 35483557. M. Jiang, C. Li, J. Kong, Z. Teng, and D. Zhuang, Crosslevel reinforced attention network for person re-identification, Journal of Visual Communication and Image Representation , vol. 69, p. 102775, 2020. T. Shen, T. Zhou, G. Long, J. Jiang, S. Pan, and C. Zhang, DiSAN: Directional self-attention network for RNN/CNN-free language understanding, in 32nd AAAI Conference on Artificial Intelligence (AAAI 2018) . AAAI Press, 2018, pp. 54465455. O. Arshad, I. Gallo, S. Nawaz, and A. Calefati, Aiding intra-text representations with visual context for multimodal named entity recognition, in 2019 International Conference on Document Analysis and Recognition (ICDAR 2019) . IEEE, 2019, pp. 337342. W. Wu, X. Sun, and H. Wang, Question condensing networks for answer selection in community question answering, in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018) . ACL, 2018, pp. 17461755. O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. 
Gianni Brauwers was born in Spijkenisse, the Netherlands, in 1998. He received the B.S. degree in econometrics and operations research from Erasmus University Rotterdam, Rotterdam, the Netherlands, in 2019, and is currently pursuing the M.S. degree in econometrics and management science at Erasmus University Rotterdam. He is a Research Assistant at Erasmus University Rotterdam, focusing his research on neural attention models and sentiment analysis.

Flavius Frasincar was born in Bucharest, Romania, in 1971. He received the M.S. degree in computer science, in 1996, and the M.Phil. degree in computer science, in 1997, from Politehnica University of Bucharest, Bucharest, Romania, and the P.D.Eng. degree in computer science, in 2000, and the Ph.D. degree in computer science, in 2005, from Eindhoven University of Technology, Eindhoven, the Netherlands. Since 2005, he has been an Assistant Professor in computer science at Erasmus University Rotterdam, Rotterdam, the Netherlands.
He has published in numerous conferences and journals in the areas of databases, Web information systems, personalization, machine learning, and the Semantic Web. He is a member of the editorial boards of Decision Support Systems, International Journal of Web Engineering and Technology, and Computational Linguistics in the Netherlands Journal, and co-editor-in-chief of the Journal of Web Engineering. Dr. Frasincar is a member of the Association for Computing Machinery.