A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation
Yongkang Wang1*, Xuan Liu1*, Feng Huang1, Zhankun Xiong1, Wen Zhang1,2,3†
1College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
2Hubei Key Laboratory of Agricultural Bioinformatics, Huazhong Agricultural University, Wuhan 430070, China
3Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan 430070, China
{wyky481, lx666, fhuang233, xiongzk}@webmail.hzau.edu.cn, zhangwen@mail.hzau.edu.cn
Abstract
Therapeutic peptides represent a unique class of pharmaceu-
tical agents crucial for the treatment of human diseases. Re-
cently, deep generative models have exhibited remarkable
potential for generating therapeutic peptides, but they only
utilize sequence or structure information alone, which hin-
ders the performance in generation. In this study, we pro-
pose a Multi-Modal Contrastive Diffusion model (MMCD),
fusing both sequence and structure modalities in a diffusion
framework to co-generate novel peptide sequences and struc-
tures. Specifically, MMCD constructs the sequence-modal
and structure-modal diffusion models, respectively, and de-
vises a multi-modal contrastive learning strategy with inter-contrastive and intra-contrastive objectives at each diffusion timestep,
aiming to capture the consistency between two modalities
and boost model performance. The inter-contrastive aligns se-
quences and structures of peptides by maximizing the agree-
ment of their embeddings, while the intra-contrastive differ-
entiates therapeutic and non-therapeutic peptides by max-
imizing the disagreement of their sequence/structure em-
beddings simultaneously. The extensive experiments demon-
strate that MMCD performs better than other state-of-the-
art deep generative methods in generating therapeutic pep-
tides across various metrics, including antimicrobial/anti-
cancer score, diversity, and peptide-docking.
Introduction
Therapeutic peptides, such as antimicrobial and anticancer
peptides, are a unique class of pharmaceutical agents that
comprise short chains of amino acids, exhibiting significant
potential in treating complex human diseases (Jakubczyk
et al. 2020). Traditionally, therapeutic peptides are discov-
ered through a comprehensive screening of sequence spaces
using phage/yeast display technologies (Muttenthaler et al.
2021) or computational tools trained for scoring desired
properties (Lee et al. 2017; Lee, Wong, and Ferguson 2018).
However, the combinatorial space of possible peptides is vast and only a small fraction of candidates satisfies therapeutic requirements; thus, such brute-force screening methods
can be time-consuming and costly.
*These authors contributed equally.
†Corresponding authors.
Copyright © 2024, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.

In recent years, deep generative models (DGMs) have
demonstrated success in generating images (Liu and Chilton
2022), texts (Iqbal and Qureshi 2022), proteins (Wu et al.
2021), and also gained popularity in peptides. DGMs ex-
plored a more expansive chemical space that affords the
creation of structurally novel peptides, by training neu-
ral networks to approximate the underlying distribution of
observed or known ones (Wan, Kontogiorgos, and Fuente
2022). For example, autoregression-based methods depicted
peptide sequences as sentences composed of residue tokens,
so that the problem can be solved by predicting residue ar-
rangement via recurrent neural networks (RNN) (Müller,
Hiss, and Schneider 2018; Capecchi et al. 2021). Variational
autoencoder (VAE)-based methods generated new peptide
sequences by sampling from the latent space learned through
an encoder-decoder architecture, with or without therapeu-
tic properties as conditional constraints (Ghorbani et al.
2022; Szymczak et al. 2023b). Generative adversarial net-
work (GAN)-based methods trained the generator and dis-
criminator using known data, which compete against each
other to generate new peptides (Tucs et al. 2020; Oort et al.
2021; Lin, Lin, and Lane 2022). Nowadays, diffusion mod-
els (Yang et al. 2023) are prevalent in the generation of pro-
tein sequences and structures, owing to their superior capa-
bility in fitting distributions compared to prior techniques
(Shi et al. 2023; Wu et al. 2022). Likewise, these advanced
diffusion models can be extended to peptide generation and
are expected to deliver favorable outcomes.
Despite the commendable progress of efforts above, they
focused on generating either sequences (i.e., residue ar-
rangements) or structures (i.e., spatial coordinates of back-
bone atoms), ignoring that models fusing information from
both modalities may outperform their uni-modal counter-
parts (Huang et al. 2021). However, how to effectively in-
tegrate the multi-modal information and capture their con-
sistency in peptide generation is a major challenge. Addi-
tionally, compared with generation tasks for images, texts,
and proteins that involve millions of labeled samples, public
datasets for therapeutic peptides typically contain only thou-
sands of sequence or structure profiles, induced by the high
cost of in vitro screening. This limited amount of available
data may result in overfitting (Webster et al. 2019), which
confines generated outcomes within a restricted distribution,
consequently compromising the model’s generalization ability. How to fully leverage existing peptide data, such as ther-
apeutic and non-therapeutic peptides, to enhance the gener-
ation performance could be regarded as another challenge.
To address these challenges, we propose a Multi-Modal
Contrastive Diffusion model for therapeutic peptide genera-
tion, named MMCD . Specifically, we build a multi-modal
framework that integrates sequence-modal and structure-
modal diffusion models for co-generating residue arrange-
ments and backbone coordinates of peptides. To ensure con-
sistency between the two modalities during the generation
process, we bring in an inter-modal contrastive learning
(Inter-CL) strategy. Inter-CL aligns sequences and struc-
tures, by maximizing the agreement between their embed-
dings derived from the same peptides at each diffusion
timestep. Meanwhile, to avoid the issue of inferior per-
formance caused by limited therapeutic peptide data, we
incorporate substantial known non-therapeutic peptides as
data augmentations to devise an intra-modal CL (Intra-
CL). Intra-CL differentiates therapeutic and non-therapeutic
peptides by maximizing the disagreement of their se-
quence/structure embeddings at each diffusion timestep,
driving the model to precisely fit the distribution of thera-
peutic peptides. Overall, the main contributions of this work
are described as follows:
• We propose a multi-modal diffusion model that inte-
grates both sequence and structure information to co-
generate residue arrangements and backbone coordinates
of therapeutic peptides, whereas previous works focused
only on a single modality.
• We design the inter-intra CL strategy at each diffusion
timestep, which aims to maximize the agreement be-
tween sequence and structure embeddings for aligning
multi-modal information, and maximize the disagree-
ment between therapeutic and non-therapeutic peptides
for boosting model generalization.
• Extensive experiments conducted on peptide datasets
demonstrate that MMCD surpasses the current state-of-
the-art baselines in generating therapeutic peptides, par-
ticularly in terms of antimicrobial/anticancer score, di-
versity, and pathogen-docking.
Related works
Diffusion Model for Protein Generation
Diffusion models (Song and Ermon 2019; Trippe et al. 2023) learn the noise that adequately destroys the source data and iteratively remove noise from a prior distribution to generate new samples; they have emerged as
cutting-edge methods for numerous generation tasks, es-
pecially in proteins (Wu et al. 2022; Cao et al. 2023).
For example, Liu et al. (2023) proposed a text-guided conditional diffusion model for sequence generation.
Hoogeboom et al. (2022) introduced ProtDiff with an E(3)
equivariant graph neural network to learn a diverse distri-
bution over backbone coordinates of structures. Luo et al.
(2022) considered both the position and orientation of anti-
body residues, achieving an equivariant diffusion model for
sequence-structure co-generation. Despite their success, the fusion of both sequence and structure modalities in diffu-
sion models has not been comprehensively investigated, and
their potential for peptide generation remains unexplored.
To fill this gap, we implement a peptide-oriented diffu-
sion model capable of sequence-structure co-generation and
multi-modal data fusion.
Contrastive Learning
Being popular in self-supervised learning, contrastive learn-
ing (CL) allows models to learn the knowledge behind data
without explicit labels (Xia et al. 2022; Zhu et al. 2023). It
aims to bring an anchor (i.e., data sample) closer to a posi-
tive/similar instance and away from many negative/dissimi-
lar instances, by optimizing their mutual information in the
embedding space. Strategies to yield the positive and neg-
ative pairs often dominate the model performance (Zhang
et al. 2022). For example, Yuan et al. (2021) proposed a
multi-modal CL to align text and image data, which encour-
ages the agreement of corresponding text-image pairs (posi-
tive) to be greater than those of all non-corresponding pairs
(negative). Wu, Luu, and Dong (2022) designed a CL frame-
work that makes full use of semantic relations among text
samples via efficient positive and negative sampling strate-
gies, to mitigate data sparsity for short text modeling. Zhang
et al. (2023b) augmented the protein structures using differ-
ent conformers, and maximized the agreement/disagreement
between the learned embeddings of same/different proteins,
aiming to learn more discriminative representations. How-
ever, these CL strategies have yet to be extended to peptide-
related studies. Therefore, we devise the novel CL strategy
in peptide generation, which serves as an auxiliary objective
to enforce sequence-structure alignment and boost model
performance.
Methodology
In this section, we formulate the peptide co-generation prob-
lem for sequence and structure. Subsequently, we describe in detail the components of our method MMCD,
including the diffusion model for peptide generation and the
multi-modal contrastive learning strategy. The overview of
MMCD is illustrated in Figure 1.
Problem Formulation
A peptide with $N$ residues (amino acids) can be represented as a sequence-structure tuple, denoted as $X=(S, C)$. $S=[s_i]_{i=1}^{N}$ stands for the sequence, with $s_i \in \{\mathrm{A,C,D,E,F,G,H,I,K,L,M,N,P,Q,R,S,T,V,W,Y}\}$ as the type of the $i$-th residue, and $C=[c_i]_{i=1}^{N}$ stands for the structure, with $c_i \in \mathbb{R}^{3\times 4}$ as the Cartesian coordinates of the $i$-th residue (involving the four backbone atoms N-C$_\alpha$-C-O). Our goal is to model the joint distribution of $X$ based on the known peptide data, so that sequences (i.e., residue types) and structures (i.e., residue coordinates) of new peptides can be co-generated by sampling from the distribution.
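To make the data layout concrete, here is a minimal sketch of this representation (PyTorch is an assumption, since the paper shows no code, and the helper name `encode_peptide` is hypothetical). It encodes a peptide as an $N\times 20$ one-hot sequence matrix and an $N\times 4\times 3$ coordinate tensor, equivalent to the paper's per-residue $\mathbb{R}^{3\times 4}$ layout up to transposition.

```python
import torch

RESIDUE_ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 residue types from the formulation

def encode_peptide(sequence: str, coords: torch.Tensor):
    """Encode a peptide X = (S, C).

    sequence: string of N one-letter residue codes.
    coords:   tensor of shape (N, 4, 3) with Cartesian coordinates of the four
              backbone atoms (N, C-alpha, C, O) of each residue.
    Returns S as an (N, 20) one-hot matrix and C as the (N, 4, 3) coordinates.
    """
    idx = torch.tensor([RESIDUE_ALPHABET.index(a) for a in sequence])
    S = torch.nn.functional.one_hot(idx, num_classes=20).float()  # (N, 20)
    assert coords.shape == (len(sequence), 4, 3)
    return S, coords

# Example: a 5-residue peptide with dummy coordinates.
S, C = encode_peptide("GAKRW", torch.randn(5, 4, 3))
```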
Diffusion Model for Peptide Generation
The diffusion model defines Markov chains of processes in which latent variables are encoded by a forward diffusion process and decoded by a reverse generative process (Sohl-Dickstein et al. 2015). Let $X^0=(S^0, C^0)$ denote the ground-truth peptide and $X^t=(S^t, C^t)$, for $t=1,\ldots,T$, the latent variable at timestep $t$. Peptide generation can be modeled as an evolving thermodynamic system, where the forward process $q(X^t \mid X^{t-1})$ gradually injects small noise into the data $X^0$ until reaching a random noise distribution at timestep $T$, and the reverse process $p_\theta(X^{t-1} \mid X^t)$, with learnable parameters $\theta$, learns to denoise the latent variable $X^t$ towards the data distribution (Luo et al. 2022).

Figure 1: Overview of the MMCD. MMCD consists of a diffusion model for peptide sequence-structure co-generation and multi-modal contrastive learning (CL). The diffusion model involves a forward process ($q(\cdot \mid \cdot)$) for adding noise and a reverse process ($p(\cdot \mid \cdot)$) for denoising at each timestep $t$. The reverse process utilizes a transformer encoder (or EGNN) to extract embeddings from sequences $S$ (or structures $C$), and a sequence (or structure)-based MLP to map embeddings to the marginal-distribution (or Gaussian) noise. The multi-modal CL includes an Inter-CL and an Intra-CL, which aim to align sequence and structure embeddings and to differentiate therapeutic and non-therapeutic peptide embeddings.
Diffusion for Peptide Sequence. Following Anand and
Achim (2022), we treat residue types as categorical data and
apply discrete diffusion to sequences, where each residue
type is characterized using one-hot encoding with 20 types.
For the forward process, we add noise to residue types using
the transition matrices with the marginal distribution (Austin
et al. 2021; Vignac et al. 2023) (see details in Appendix A).
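The exact transition matrices are deferred to the paper's Appendix A (not included here); the sketch below illustrates one standard construction from the cited works (Austin et al. 2021; Vignac et al. 2023), in which each step interpolates between the identity and the marginal residue distribution. The schedule values and function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def marginal_transition_matrix(beta_t: float, m: torch.Tensor) -> torch.Tensor:
    """One-step transition matrix Q_t = (1 - beta_t) * I + beta_t * 1 m^T.

    m is the marginal distribution over the 20 residue types estimated from the
    training data; every row of Q_t is a categorical distribution.
    """
    K = m.numel()
    return (1.0 - beta_t) * torch.eye(K) + beta_t * torch.ones(K, 1) * m.view(1, K)

def forward_noise_sequence(S0: torch.Tensor, Q_bar_t: torch.Tensor) -> torch.Tensor:
    """Sample S^t ~ q(S^t | S^0) = Cat(S^0 Q_bar_t), where Q_bar_t is the product
    of the per-step transition matrices up to timestep t."""
    probs = S0 @ Q_bar_t                      # (N, 20) categorical parameters
    idx = torch.multinomial(probs, 1).squeeze(-1)
    return torch.nn.functional.one_hot(idx, num_classes=probs.shape[-1]).float()
```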
For the reverse process, the diffusion trajectory is parameterized by the probability $q(S^{t-1} \mid S^{t}, S^{0})$, and a network $\hat{p}_\theta$ is defined to predict the probability of $S^{0}$ (Austin et al. 2021), that is:

$$p_\theta\!\left(S^{t-1} \mid S^{t}\right) = \prod_{1\le i\le N} q\!\left(s_i^{t-1} \mid S^{t}, \hat{S}^{0}\right)\cdot \hat{p}_\theta\!\left(\hat{S}^{0} \mid S^{t}\right) \qquad (1)$$

where $s_i^{t}$ denotes the one-hot feature of the $i$-th residue in the sequence $S$ at timestep $t$, and $\hat{S}^{0}$ is the predicted probability of $S^{0}$. In this work, we design $\hat{p}_\theta$ as follows:

$$\hat{p}_\theta\!\left(\hat{S}^{0} \mid S^{t}\right) = \prod_{1\le i\le N} \mathrm{Softmax}\!\left(\hat{s}_i^{0} \mid \mathcal{F}_s\!\left(h_i^{t}\right)\right) \qquad (2)$$

where $h_i^{t}$ is the input feature of residue $i$ with the diffusion noise at timestep $t$ (the initialization of $h_i^{t}$ is provided in Appendix A). $\mathcal{F}_s$ is a hybrid neural network to predict the noise of residue types from the marginal distribution, and then the noise is removed to compute the probability of $\hat{s}_i^{0}$. Softmax is applied over all residue types. Here, we implement $\mathcal{F}_s$ with a transformer encoder and an MLP. The former learns contextual embeddings of residues from the sequence, while the latter maps these embeddings to the noises of residue types. The learned sequence embedding (defined as $\mathbf{S}$) is used in the downstream contrastive learning strategies.

Diffusion for Peptide Structure. As the coordinates of
atoms are continuous variables in the 3D space, the forward
process can be defined by adding Gaussian noise to atom
coordinates (Ho, Jain, and Abbeel 2020) (see details in Ap-
pendix A). Following Trippe et al. (2023), the reverse pro-
cess can be defined as:

$$p_\theta\!\left(c_i^{t-1} \mid C^{t}\right) = \mathcal{N}\!\left(c_i^{t-1} \mid \mu_\theta(C^{t}, t),\, \beta_t I\right) \qquad (3)$$

$$\mu_\theta\!\left(C^{t}, t\right) = \frac{1}{\sqrt{\alpha_t}}\left(c_i^{t} - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta\!\left(C^{t}, t\right)\right) \qquad (4)$$

where $c_i$ refers to the coordinates of the $i$-th residue in the structure $C$; $\beta_t$ is the noise rate, with $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{\tau=1}^{t}(1-\beta_\tau)$; the network $\epsilon_\theta$ is used to gradually recover the structural data by predicting the Gaussian noise. In this work, we design $\epsilon_\theta$ as follows:

$$\epsilon_\theta\!\left(C^{t}, t\right) = \mathcal{F}_c\!\left(r_i^{t}, h_i^{t}\right) \qquad (5)$$

where $r_i$ represents the coordinates of residue $i$, $h_i$ is the residue feature, and $\mathcal{F}_c$ is a hybrid neural network for predicting Gaussian noises at timestep $t$. Similar to sequence diffusion, we implement $\mathcal{F}_c$ with an equivariant graph neural network (EGNN) (Satorras, Hoogeboom, and Welling 2021) and an MLP. The former learns spatial embeddings of residues from the structure (formalized as a 3D graph), while the latter maps these embeddings to Gaussian noises. The learned structure embedding (defined as $\mathbf{C}$) is also used in the downstream contrastive learning strategies.
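For concreteness, here is a minimal sketch of one reverse denoising step for the backbone coordinates, following Eqs. (3)-(4). The function `eps_theta` stands for the EGNN+MLP network $\mathcal{F}_c$ and is treated as a black box, and the schedule tensors are assumed to be precomputed.

```python
import torch

def reverse_step_coords(C_t, t, eps_theta, betas, alpha_bars):
    """Sample C^{t-1} ~ p_theta(C^{t-1} | C^t) as in Eqs. (3)-(4).

    C_t:        (N, 4, 3) noisy backbone coordinates at timestep t.
    eps_theta:  network predicting the Gaussian noise, eps_theta(C_t, t).
    betas:      (T+1,) tensor with the noise schedule.
    alpha_bars: (T+1,) tensor with cumulative products of (1 - beta).
    """
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    eps_pred = eps_theta(C_t, t)
    # Eq. (4): posterior mean computed from the predicted noise.
    mu = (C_t - beta_t / torch.sqrt(1.0 - alpha_bars[t]) * eps_pred) / torch.sqrt(alpha_t)
    if t > 1:
        # Eq. (3): Gaussian with variance beta_t * I around the predicted mean.
        return mu + torch.sqrt(beta_t) * torch.randn_like(C_t)
    return mu  # no noise is added at the final step
```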
Diffusion Objective. Following previous work (Anand and Achim 2022), we decompose the objective of the peptide diffusion process into a sequence loss and a structure loss. For the sequence loss $\mathcal{L}_S^{t}$, we aim to minimize the cross-entropy (CE) loss between the actual and predicted residue types at timestep $t$:

$$\mathcal{L}_S^{t} = \frac{1}{N}\sum_{1\le i\le N}\mathrm{CE}\!\left(s_i^{0},\, \hat{p}_\theta(\hat{s}_i^{0} \mid S^{t})\right) \qquad (6)$$

For the structure loss $\mathcal{L}_C^{t}$, the objective is to calculate the mean squared error (MSE) between the predicted noise $\epsilon_\theta$ and the standard Gaussian noise $\epsilon$ at timestep $t$:

$$\mathcal{L}_C^{t} = \frac{1}{N}\sum_{1\le i\le N}\left\|\epsilon_i - \epsilon_\theta(C^{t}, t)\right\|^{2} \qquad (7)$$
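A minimal sketch of the two training losses in Eqs. (6)-(7), assuming the sequence denoiser outputs per-residue logits over the 20 residue types and the structure denoiser outputs a noise prediction with the same shape as the injected Gaussian noise.

```python
import torch
import torch.nn.functional as F

def sequence_loss(S0_onehot, logits_S0_pred):
    """Eq. (6): cross-entropy between the true residue types and the predicted
    probabilities of S^0 (logits of shape (N, 20))."""
    target = S0_onehot.argmax(dim=-1)               # (N,) true residue indices
    return F.cross_entropy(logits_S0_pred, target)  # mean over residues = (1/N) sum CE

def structure_loss(eps_true, eps_pred):
    """Eq. (7): squared error between the injected Gaussian noise and the
    predicted noise (averaged over all coordinate entries, proportional to Eq. (7))."""
    return F.mse_loss(eps_pred, eps_true)
```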
Multi-Modal Contrastive Learning Strategy
When multiple modal data (e.g., sequence and structure) co-
exist, it becomes imperative to capture their consistency to
reduce the heterogeneous differences between modalities,
allowing them to be better fused in generation tasks. Mutual
information (MI) is a straightforward solution to measure
the non-linear dependency (consistency) between variables
(Liu et al. 2023); thus, maximizing MI between modalities
can force them to align and share more crucial information.
Along this line, we bring in contrastive learning (CL) to
align sequences and structures by maximizing their MI in
the embedding space. Specifically, we devise CL strategies
for each diffusion timestep $t$, as follows:

Inter-CL. For a peptide, we define its sequence as the anchor, its structure as the positive instance, and the structures of other peptides in a mini-batch as the negative instances. Then, we maximize the MI of the positive pair (anchor and positive instance) while minimizing the MI of negative pairs (anchor and negative instances), based on embeddings learned from the networks $\hat{p}_\theta$ and $\epsilon_\theta$. Further, we establish a 'dual' contrast where the structure acts as the anchor and sequences are instances. The objective is to minimize the following InfoNCE-based (Chen et al. 2020) loss function:

$$\mathcal{L}_{inter}^{t} = -\frac{1}{2}\left[\log\frac{E\!\left(S_i^{t}, C_i^{t}\right)}{\sum_{j=1}^{M} E\!\left(S_i^{t}, C_j^{t}\right)} + \log\frac{E\!\left(C_i^{t}, S_i^{t}\right)}{\sum_{j=1}^{M} E\!\left(C_i^{t}, S_j^{t}\right)}\right] \qquad (8)$$

where $S_i$/$C_i$ are the sequence/structure embeddings of the $i$-th peptide in the mini-batch, $E(\cdot,\cdot)$ is the cosine similarity function with a temperature coefficient to measure the MI score between two variables, and $M$ is the size of a mini-batch.
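The sketch below expresses Eq. (8) in the standard symmetric InfoNCE form (exponentiated cosine similarities, averaged over all anchors in the mini-batch); the temperature value is an assumed hyperparameter, and the embeddings are those produced by $\hat{p}_\theta$ and $\epsilon_\theta$ at the current timestep.

```python
import torch
import torch.nn.functional as F

def inter_cl_loss(seq_emb, str_emb, tau=0.1):
    """Eq. (8): align sequence and structure embeddings of the same peptide.

    seq_emb, str_emb: (M, d) embeddings from the sequence and structure
    denoisers at the current timestep; positive pairs sit on the diagonal.
    """
    seq = F.normalize(seq_emb, dim=-1)
    stc = F.normalize(str_emb, dim=-1)
    sim = seq @ stc.t() / tau                       # (M, M) cosine similarities / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    # Symmetric contrast: sequence as anchor and structure as anchor.
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))
```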
In addition, the diffusion model can only memorize confined generation patterns if the therapeutic peptide data available for training are limited, which may lead to inferior general-
ization towards novel peptides. To alleviate this issue, we
introduce contrastive learning to boost the generative capac-
ity of networks ˆpθandϵθby enriching the supervised sig-
nals. However, it is unwise to construct positive instances by
performing data augmentations on therapeutic peptides, as
even minor perturbations may lead to significant functional
changes (Yadav, Kumar, and Singh 2022). Hence, our fo-
cus lies on employing effective strategies for selecting neg-
ative instances. In this regard, we collect non-therapeutic
peptides from public databases to treat them as negative in-
stances, and maximize the disagreement between embed-
dings of therapeutic and non-therapeutic peptides. In detail,
we devise an Intra-CL strategy for each diffusion timestep t,
as follows:
Intra-CL. In a mini-batch, we define the sequence of a therapeutic peptide $i$ as the anchor, and the sequence of another therapeutic peptide $j$ as the positive instance, while the sequences of non-therapeutic peptides $k$ are regarded as negative instances. Similar to Inter-CL, we then maximize/minimize the MI of positive/negative pairs. We also establish a structure-oriented contrast by using structures of therapeutic and non-therapeutic peptides to construct the anchor, positive, and negative instances. The objective is to minimize the following loss function (Zheng et al. 2021):

$$\mathcal{L}_{intra}^{t} = -\frac{1}{M}\sum_{j=1,\,j\neq i}^{M}\mathbb{1}_{y_i=y_j}\left(\log\frac{E\!\left(S_i^{t}, S_j^{t}\right)}{\sum_{k=1}^{M}\mathbb{1}_{y_i\neq y_k}\,E\!\left(S_i^{t}, S_k^{t}\right)} + \log\frac{E\!\left(C_i^{t}, C_j^{t}\right)}{\sum_{k=1}^{M}\mathbb{1}_{y_i\neq y_k}\,E\!\left(C_i^{t}, C_k^{t}\right)}\right) \qquad (9)$$
where $y_i$ represents the class of peptide $i$ (i.e., therapeutic or non-therapeutic). $\mathbb{1}_{y_i=y_j}$ and $\mathbb{1}_{y_i\neq y_k}$ stand for the indicator functions, whose output is 1 if $y_i=y_j$ (peptides $i$ and $j$ belong to the same class) or $y_i\neq y_k$ (the types of peptides $i$ and $k$ are different); otherwise the output is 0. The indicator function filters therapeutic and non-therapeutic peptides from the data for creating positive and negative pairs.

Methods      AMP                                             ACP
             Similarity↓   Instability↓   Antimicrobial↑     Similarity↓   Instability↓   Anticancer↑
LSTM-RNN     39.6164       45.0862        0.8550             36.9302       47.0669        0.7336
AMPGAN*      38.3080       51.5236        0.8617             -             -              -
HydrAMP*     31.0662       59.6340        0.8145             -             -              -
WAE-PSO*     -             -              -                  41.2524       42.5061        0.7443
DiffAB       28.9849       43.3607        0.8024             31.4220       36.0610        0.6669
SimDiff      25.5385       41.1629        0.8560             28.8245       33.0405        0.7222
MMCD         24.4107       39.9649        0.8810             27.4685       31.7381        0.7604
'*' represents that the method relies on domain-specific biological knowledge. '-' represents that the method is unsuitable for the current task. For example, AMPGAN and HydrAMP are only designed for the AMP generation.
Table 1: Results for the sequence generation.

Methods       AMP                                     ACP
              Ramachandran↑   RMSD↓    Docking↑       Ramachandran↑   RMSD↓
APPTEST       69.6576         2.7918   1362           67.9826         2.8055
FoldingDiff   72.4681         2.5118   1574           72.0531         2.6033
ProtDiff      71.3078         2.5544   1533           69.7589         2.4960
DiffAB        72.9647         2.3844   1608           71.3225         2.5513
SimDiff       76.1378         2.1004   1682           76.6164         2.4118
MMCD          80.4661         1.8278   1728           78.2157         2.0847
Table 2: Results for the structure generation.
The reason behind the design of Intra-CL is intuitive.
First, the non-therapeutic class naturally implies opposite in-
formation against the therapeutic class, and hence it makes
the model more discriminative. Second, maximizing the disagreement between classes (1) can induce bi-
ases in the embedding distribution of therapeutic peptides,
identifying more potential generation space, and (2) can ex-
plicitly reinforce embedding-class correspondences during
diffusion, maintaining high generation fidelity (Zhu et al.
2022). Further analysis is detailed in the ablation study.
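A sketch of the Intra-CL term of Eq. (9) for one modality (the structure term is analogous), with therapeutic peptides as anchors, same-class peptides as positives, and non-therapeutic peptides as negatives; as in the Inter-CL sketch, the exponentiated cosine similarities and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def intra_cl_loss(emb, y, tau=0.1):
    """Eq. (9) for a single modality: pull therapeutic embeddings together and
    push them away from non-therapeutic ones.

    emb: (M, d) sequence (or structure) embeddings at the current timestep.
    y:   (M,) class labels (1 = therapeutic, 0 = non-therapeutic).
    """
    emb = F.normalize(emb, dim=-1)
    sim = torch.exp(emb @ emb.t() / tau)             # pairwise exp(cosine similarity / tau)
    M = len(y)
    loss = emb.new_zeros(())
    for i in (y == 1).nonzero(as_tuple=True)[0]:     # therapeutic peptides act as anchors
        pos_mask = (y == y[i])
        pos_mask[i] = False                          # exclude the anchor itself
        neg = sim[i][y != y[i]].sum().clamp_min(1e-8)
        loss = loss - torch.log(sim[i][pos_mask] / neg).sum()
    return loss / M
```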
Model Training
The ultimate objective function is the sum of the diffusion losses for sequence and structure generation and the CL losses for Intra-CL and Inter-CL:

$$\mathcal{L}_{total} = \mathbb{E}_{t\sim\mathrm{Uniform}(1\ldots T)}\!\left[\alpha\left(\mathcal{L}_S^{t} + \mathcal{L}_C^{t}\right) + (1-\alpha)\left(\mathcal{L}_{intra}^{t} + \mathcal{L}_{inter}^{t}\right)\right] \qquad (10)$$

where $\alpha$ represents a hyperparameter to balance the contributions of the different tasks, and $\mathrm{Uniform}(1\ldots T)$ denotes the uniform distribution over the diffusion timesteps. The implementation details of MMCD and the sampling process of peptide generation can be found in Appendix A.
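To tie the pieces together, here is a hedged sketch of one training step implementing Eq. (10). It reuses the loss sketches above; `model.add_noise`, `model.denoise`, and the `batch` fields are hypothetical helpers standing in for the forward diffusion and the transformer/EGNN denoisers.

```python
import torch

def training_step(batch, model, T, alpha=0.5):
    """One step of Eq. (10): L = alpha*(L_S + L_C) + (1-alpha)*(L_intra + L_inter),
    with the timestep t drawn uniformly from {1, ..., T}."""
    t = torch.randint(1, T + 1, (1,)).item()
    S_t, C_t, eps = model.add_noise(batch, t)                 # forward diffusion (hypothetical helper)
    logits_S0, eps_pred, seq_emb, str_emb = model.denoise(S_t, C_t, t)
    L_S = sequence_loss(batch.S0, logits_S0)                  # Eq. (6)
    L_C = structure_loss(eps, eps_pred)                       # Eq. (7)
    L_inter = inter_cl_loss(seq_emb, str_emb)                 # Eq. (8)
    L_intra = intra_cl_loss(seq_emb, batch.y) + intra_cl_loss(str_emb, batch.y)  # Eq. (9)
    return alpha * (L_S + L_C) + (1.0 - alpha) * (L_intra + L_inter)
```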
Experiments
Experimental Setups
Datasets. Following previous studies (Thi Phan et al.
2022; Zhang et al. 2023a), we collected therapeutic pep-
tide data from public databases, containing two biologi-
cal types, i.e., antimicrobial peptides (AMP) and anticancer
peptides (ACP). Among these collected peptides, a portion
of them only have 1D sequence information, without 3D structure information. Then, we applied Rosetta-based com-
putational tools (Chaudhury, Lyskov, and Gray 2010) to pre-
dict the missing structures based on their sequences. Finally,
we compiled two datasets, one containing 20,129 antimi-
crobial peptides and the other containing 4,381 anticancer
peptides. In addition, we paired an equal number of labeled
non-therapeutic peptides (collected from public databases)
with each of the two datasets, exclusively for the contrastive
learning task.
Baselines. We compared our method with the follow-
ing advanced methods for peptide generation at sequence
and structure levels. For the sequence generation, the
autoregression-based method LSTM-RNN (Müller, Hiss, and Schneider 2018), the GAN-based method AMPGAN (Oort et al. 2021), and the VAE-based methods including
WAE-PSO (Yang et al. 2022) and HydrAMP (Szymczak
et al. 2023a) are listed as baselines. For the structure gener-
ation, we took APPTEST (Timmons and Hewage 2021) as a
baseline, which combines the neural network and simulated
annealing algorithm for structure prediction. Moreover, we
extended diffusion-based methods for protein generation to
peptides. The diffusion-based methods for structure genera-
tion (e.g., FoldingDiff (Wu et al. 2022) and ProtDiff (Trippe
et al. 2023)) and the sequence-structure co-design (e.g., Dif-
fAB (Luo et al. 2022) and SimDiff (Zhang et al. 2023b)) are considered separately for comparison in the sequence and structure generation.
Figure 2: (a) The sample ratio under different sequence lengths in the AMP dataset, where the red line is the average ratio. (b) The similarity and RMSD scores of MMCD and baselines across different sequence lengths.

Evaluation protocol. Here, we required each model (ours and baselines) to generate 1,000 new peptides, and then evaluated the quality of generated peptides with the following metrics. For the sequence, similarity score is used to
quantify how closely the generated sequences match exist-
ing ones, with a lower score indicating higher novelty; insta-
bility score (Müller et al. 2017) indicates the degree of pep-
tide instability; antimicrobial /anticancer score evaluates
the probability of peptides having therapeutic properties.
For the structure, Ramachandran score (Hollingsworth and
Karplus 2010) assesses the reliability of peptide structures;
RMSD score measures the structural similarity between
generated and existing peptides, with a lower score indi-
cating higher authenticity; docking score (Flórez-Castillo
et al. 2020) evaluates the binding degree of antimicro-
bial peptides to bacterial membrane proteins (PDB ID:
6MI7). We only reported the average metrics over all gen-
erated peptides for each method in the experimental re-
sults. Detailed information about the datasets, baselines,
metrics, and implementations can be found in Appendix
B. Our code, data and appendix are available on GitHub
(https://github.com/wyky481l/MMCD).
Experimental Results
Performance comparison. In the results of sequence gen-
eration under two datasets (as shown in Table 1), MMCD ex-
hibited lower similarity and instability scores than all base-
lines, suggesting its good generalization ability in generating
diverse and stable peptides. Meanwhile, MMCD surpassed
all baselines with higher antimicrobial and anticancer scores
across AMP and ACP datasets, highlighting its strong po-
tential for generating therapeutic peptides. Beyond that, we
noticed that diffusion-based baselines (e.g., SimDiff, Dif-
fAB) exhibit higher stability and diversity but lower ther-
apeutic scores compared to baselines that incorporate bio-
logical knowledge (e.g., AMPGAN, HydrAMP, WAE-PSO,
details in Appendix B). By contrast, MMCD introduced bio-
logical knowledge into the diffusion model by designing the
contrastive learning of therapeutic and non-therapeutic pep-
tides, thereby delivering optimality across various metrics.
For the results of structure generation (as shown in Ta-
ble 2), MMCD also outperformed all the baselines and ex-
ceeded the best baselines (DiffAB and SimDiff) by 23.3% and 12.9% in RMSD scores, 10.2% and 5.6% in Ramachandran scores, and 7.4% and 2.7% in docking scores for the AMP
dataset. The higher Ramachandran score and lower RMSD
score of MMCD underlined the reliability of our generated
peptide structures. Especially in peptide docking, we found that MMCD shows the best docking score compared with
baselines, which indicates great binding interactions with
the target protein. Overall, MMCD is superior to all base-
lines in both sequence and structure generation of peptides,
and its impressive generative ability holds great promise to
yield high-quality therapeutic peptides.
Performance on different sequence lengths. In our
dataset, sequence lengths of different peptides exhibited sub-
stantial variation, with the number of residues ranging from
5 to 50 (Figure 2-a). We required models to generate 20
new peptides (sequences or structures) at each sequence
length. Note that two methods, AMPGAN and HydrAMP,
were excluded from the comparison because they cannot
generate peptides with fixed lengths. From the generated re-
sults on the AMP dataset (Figure 2-b), MMCD exceeded
the baselines in terms of similarity and RMSD scores at
each sequence length. With the increasing sequence lengths,
there is a general trend of increased similarity and RMSD
scores across all methods. One possible reason for this trend
is that designing longer peptides becomes more complex,
given the more prominent search space involved. Addition-
ally, the scarcity of long-length peptides poses challenges in
accurately estimating the similarity between generated and
known peptides. In summary, these observations supported
that MMCD excels at generating diverse peptides across dif-
ferent lengths, especially shorter ones.
Ablation study
To investigate the necessity of each module in MMCD, we
conducted several comparisons between MMCD and its
variants: (1) MMCD (w/o Inter-CL) that removes the Inter-
CL task, (2) MMCD (w/o Intra-CL) that removes the Intra-
CL task, and (3) MMCD (w/o Inter-CL & Intra-CL) that re-
moves both Inter-CL and Intra-CL tasks. The comparisons
were operated on both AMP and ACP datasets, and the re-
sults are shown in Table 3 and Appendix Table 1. When the
Inter-CL was removed (w/o Inter-CL), we observed a de-
cline in all metrics for peptide sequence and structure gen-
eration, implying the importance of aligning two modalities
via CL. The variant (w/o Intra-CL) results signified that us-
ing the CL to differentiate therapeutic and non-therapeutic
peptides contributes to the generation. As expected, the per-
formance of MMCD dropped significantly after removing both Inter-CL and Intra-CL (w/o Inter-CL & Intra-CL).

Methods                         AMP                                             ACP
                                Similarity↓   Instability↓   Antimicrobial↑     Similarity↓   Instability↓   Anticancer↑
MMCD (w/o InterCL & IntraCL)    27.4794       42.5359        0.8013             31.2820       34.6888        0.6996
MMCD (w/o IntraCL)              26.6889       41.2631        0.8584             28.9782       33.0268        0.7513
MMCD (w/o InterCL)              24.9079       41.7646        0.8494             28.0143       33.9816        0.7352
MMCD                            24.4107       39.9649        0.8810             27.4685       31.7381        0.7604
Table 3: Ablation study on the sequence-level generation task.

Figure 3: (a) The t-SNE for structure and sequence embeddings of therapeutic peptides (AMP data) obtained from MMCD (w/o Inter-CL) and MMCD. (b) The t-SNE for embeddings (including structures and sequences) of therapeutic (AMP) and non-therapeutic (non-AMP) peptides obtained from MMCD (w/o Intra-CL) and MMCD.
To better understand the strengths of Inter-CL and Intra-
CL, we performed the t-SNE (Van der Maaten and Hin-
ton 2008) visualization using the learned embeddings of
peptides on the AMP dataset. As illustrated in Figure 3-
a, Inter-CL effectively promoted the alignment of sequence
and structure embeddings, facilitating the shared crucial in-
formation (dashed circle) to be captured during diffusion.
The t-SNE of Intra-CL (Figure 3-b) also revealed that it bet-
ter distinguished therapeutic peptides from non-therapeutic
ones in the embedding distribution. The resulting dis-
tribution bias may identify more potential generation space,
thus leading to higher quality and diversity of therapeutic
peptides generated by MMCD. Overall, MMCD with all the
modules fulfilled superior performance, and removing any
modules will diminish its generation power.
Peptide-docking analysis
To test the validity of generated peptide structures, we con-
ducted a molecular-docking simulation. Here, a peptide was
randomly selected from the AMP dataset as the reference, and the methods (Figure 4) were employed to generate cor-
responding structures based on the sequence of the reference
peptide (see details in Appendix C). The lipopolysaccharide
on the outer membrane of bacteria (Li, Orlando, and Liao
2019) was selected as the target protein for molecular dock-
ing. Then, we extracted the residues within a 5 Å proxim-
ity between peptides (i.e., the reference and generated struc-
tures) and the active pocket of target protein in docking com-
plexes, to visualize their binding interactions (Miller et al.
2021). Of these docking results, all methods yielded a new
structure capable of binding to the target protein, and our
method exhibited the highest docking scores and displayed
binding residues most similar to the reference structure. This
prominent result underscored the reliability and therapeutic
potential of our method for peptide generation.
Figure 4: Docking analysis (interactive visualization between the target protein and peptides) of the reference and generated structures by MMCD and baselines (panels: Reference, MMCD, SimDiff, DiffAB, FoldingDiff, and ProtDiff, each annotated with its docking score and RMSD). Thick lines represent the residues of peptides, and the thin lines show the binding residues for protein-peptide complexes.
Conclusion
In this work, we propose a multi-modal contrastive dif-
fusion model for the co-generation of peptide sequences
and structures, named MMCD. MMCD is dedicated to
leveraging a multi-modal contrastive learning strategy to
capture consensus-related and difference-related informa-
tion behind the sequences/structures and therapeutic/non-
therapeutic peptides, enhancing the diffusion model to gen-
erate high-quality therapeutic peptides. The experimental
results unequivocally demonstrate the capability of our
method in co-generating peptide sequence and structure,
surpassing state-of-the-art baseline methods with advanta-
geous performance.Acknowledgments
This work was supported by the National Natural Sci-
ence Foundation of China (62372204, 62072206, 61772381,
62102158); Huazhong Agricultural University Scien-
tific & Technological Self-innovation Foundation; Fun-
damental Research Funds for the Central Universities
(2662021JC008, 2662022JC004). The funders have no role
in study design, data collection, data analysis, data interpre-
tation, or writing of the manuscript.
References
Anand, N.; and Achim, T. 2022. Protein Structure and
Sequence Generation with Equivariant Denoising Diffusion
Probabilistic Models. arxiv:2205.15019.
Austin, J.; Johnson, D. D.; Ho, J.; Tarlow, D.; and van den
Berg, R. 2021. Structured Denoising Diffusion Models in
Discrete State-Spaces. In Advances in Neural Information
Processing Systems , volume 34, 17981–17993. Curran As-
sociates, Inc.
Cao, H.; Tan, C.; Gao, Z.; Xu, Y .; Chen, G.; Heng, P.-A.; and
Li, S. Z. 2023. A Survey on Generative Diffusion Model.
arxiv:2209.02646.
Capecchi, A.; Cai, X.; Personne, H.; Köhler, T.; van Delden,
C.; and Reymond, J.-L. 2021. Machine Learning Designs
Non-Hemolytic Antimicrobial Peptides. Chem Sci , 12(26):
9221–9232.
Chaudhury, S.; Lyskov, S.; and Gray, J. J. 2010. PyRosetta:
A Script-Based Interface for Implementing Molecular Mod-
eling Algorithms Using Rosetta. Bioinformatics , 26(5):
689–691.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020.
A simple framework for contrastive learning of visual repre-
sentations. In International conference on machine learning ,
1597–1607. PMLR.
Flórez-Castillo, J. M.; Rondón-Villareal, P.; Ropero-Vega, J. L.; Mendoza-Espinel, S. Y.; Moreno-Amézquita, J. A.; Méndez-Jaimes, K. D.; Farfán-García, A. E.; Gómez-Rangel, S. Y.; and Gómez-Duarte, O. G. 2020. Ib-M6 An-
timicrobial Peptide: Antibacterial Activity against Clinical
Isolates of Escherichia Coli and Molecular Docking. Antibi-
otics , 9(2): 79.
Ghorbani, M.; Prasad, S.; Brooks, B. R.; and Klauda, J. B.
2022. Deep Attention Based Variational Autoencoder for
Antimicrobial Peptide Discovery.
Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion
Probabilistic Models. In Advances in Neural Information
Processing Systems , volume 33, 6840–6851. Curran Asso-
ciates, Inc.
Hollingsworth, S. A.; and Karplus, P. A. 2010. A Fresh Look
at the Ramachandran Plot and the Occurrence of Standard
Structures in Proteins. 1(3-4): 271–283.
Hoogeboom, E.; Satorras, V . G.; Vignac, C.; and Welling,
M. 2022. Equivariant Diffusion for Molecule Generation in
3D. In Proceedings of the 39th International Conference on
Machine Learning, 8867–8887. PMLR.
Huang, Y.; Du, C.; Xue, Z.; Chen, X.; Zhao, H.; and Huang,
L. 2021. What makes multi-modal learning better than sin-
gle (provably). Advances in Neural Information Processing
Systems , 34: 10944–10956.
Iqbal, T.; and Qureshi, S. 2022. The Survey: Text Generation
Models in Deep Learning. J King Saud Univ-com , 34(6, Part
A): 2515–2528.
Jakubczyk, A.; Karaś, M.; Rybczyńska-Tkaczyk, K.; Zielińska, E.; and Zieliński, D. 2020. Current Trends of
Bioactive Peptides—New Sources and Therapeutic Effect.
Foods , 9(7): 846.
Lee, E. Y .; Lee, M. W.; Fulan, B. M.; Ferguson, A. L.; and
Wong, G. C. L. 2017. What Can Machine Learning Do
for Antimicrobial Peptides, and What Can Antimicrobial
Peptides Do for Machine Learning? Interface Focus , 7(6):
20160153.
Lee, E. Y .; Wong, G. C. L.; and Ferguson, A. L.
2018. Machine Learning-Enabled Discovery and Design of
Membrane-Active Peptides. Bioorgan Med Chem , 26(10):
2708–2718.
Li, Y .; Orlando, B. J.; and Liao, M. 2019. Structural Basis of
Lipopolysaccharide Extraction by the LptB2FGC Complex.
Nature , 567(7749): 486–490.
Lin, E.; Lin, C.-H.; and Lane, H.-Y . 2022. De novo peptide
and protein design using generative adversarial networks:
an update. Journal of Chemical Information and Modeling ,
62(4): 761–774.
Liu, S.; Zhu, Y .; Lu, J.; Xu, Z.; Nie, W.; Gitter, A.; Xiao, C.;
Tang, J.; Guo, H.; and Anandkumar, A. 2023. A Text-guided
Protein Design Framework. arxiv:2302.04611.
Liu, V .; and Chilton, L. B. 2022. Design Guidelines for
Prompt Engineering Text-to-Image Generative Models. In
Proceedings of the 2022 CHI Conference on Human Fac-
tors in Computing Systems , CHI ’22, 1–23. New York, NY ,
USA: Association for Computing Machinery. ISBN 978-1-
4503-9157-3.
Luo, S.; Su, Y .; Peng, X.; Wang, S.; Peng, J.; and Ma, J.
2022. Antigen-Specific Antibody Design and Optimization
with Diffusion-Based Generative Models for Protein Struc-
tures.
Miller, E. B.; Murphy, R. B.; Sindhikara, D.; Borrelli, K. W.;
Grisewood, M. J.; Ranalli, F.; Dixon, S. L.; Jerome, S.;
Boyles, N. A.; Day, T.; Ghanakota, P.; Mondal, S.; Rafi,
S. B.; Troast, D. M.; Abel, R.; and Friesner, R. A. 2021.
Reliable and Accurate Solution to the Induced Fit Docking
Problem for Protein–Ligand Binding. J Chem Theory Com-
put, 17(4): 2630–2639.
Müller, A. T.; Gabernet, G.; Hiss, J. A.; and Schneider, G.
2017. modlAMP: Python for Antimicrobial Peptides. Bioin-
formatics , 33(17): 2753–2755.
Müller, A. T.; Hiss, J. A.; and Schneider, G. 2018. Recurrent
Neural Network Model for Constructive Peptide Design. J
Chem Inf Model , 58(2): 472–479.
Muttenthaler, M.; King, G. F.; Adams, D. J.; and Alewood,
P. F. 2021. Trends in peptide drug discovery. Nature reviews
Drug discovery, 20(4): 309–325.
Oort, C. M. V.; Ferrell, J. B.; Remington, J. M.; Wshah, S.;
and Li, J. 2021. AMPGAN v2: Machine Learning Guided
Design of Antimicrobial Peptides.
Satorras, V . G.; Hoogeboom, E.; and Welling, M. 2021. E(n)
Equivariant Graph Neural Networks. In Proceedings of the
38th International Conference on Machine Learning , 9323–
9332. PMLR.
Shi, C.; Wang, C.; Lu, J.; Zhong, B.; and Tang, J. 2023.
Protein Sequence and Structure Co-Design with Equivariant
Translation. arxiv:2210.08761.
Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and
Ganguli, S. 2015. Deep Unsupervised Learning Using
Nonequilibrium Thermodynamics. In Proceedings of the
32nd International Conference on Machine Learning , 2256–
2265. PMLR.
Song, Y .; and Ermon, S. 2019. Generative Modeling by Es-
timating Gradients of the Data Distribution. In Advances in
Neural Information Processing Systems , volume 32. Curran
Associates, Inc.
Szymczak, P.; Możejko, M.; Grzegorzek, T.; Jurczak, R.;
Bauer, M.; Neubauer, D.; Sikora, K.; Michalski, M.; Sroka,
J.; Setny, P.; Kamysz, W.; and Szczurek, E. 2023a. Discov-
ering Highly Potent Antimicrobial Peptides with Deep Gen-
erative Model HydrAMP. Nat Commun , 14(1): 1453.
Szymczak, P.; Możejko, M.; Grzegorzek, T.; Jurczak, R.;
Bauer, M.; Neubauer, D.; Sikora, K.; Michalski, M.; Sroka,
J.; Setny, P.; et al. 2023b. Discovering highly potent an-
timicrobial peptides with deep generative model HydrAMP.
Nature Communications , 14(1): 1453.
Thi Phan, L.; Woo Park, H.; Pitti, T.; Madhavan, T.; Jeon,
Y .-J.; and Manavalan, B. 2022. MLACP 2.0: An Updated
Machine Learning Tool for Anticancer Peptide Prediction.
Comput Struct Biotec , 20: 4473–4480.
Timmons, P. B.; and Hewage, C. M. 2021. APPTEST Is
a Novel Protocol for the Automatic Prediction of Peptide
Tertiary Structures. Brief Bioinform , 22(6): bbab308.
Trippe, B. L.; Yim, J.; Tischer, D.; Baker, D.; Broderick,
T.; Barzilay, R.; and Jaakkola, T. 2023. Diffusion Proba-
bilistic Modeling of Protein Backbones in 3D for the Motif-
Scaffolding Problem. arxiv:2206.04119.
Tucs, A.; Tran, D. P.; Yumoto, A.; Ito, Y .; Uzawa, T.; and
Tsuda, K. 2020. Generating Ampicillin-Level Antimicrobial
Peptides with Activity-Aware Generative Adversarial Net-
works. ACS Omega , 5(36): 22847–22851.
Van der Maaten, L.; and Hinton, G. 2008. Visualizing data
using t-SNE. Journal of machine learning research , 9(11).
Vignac, C.; Krawczuk, I.; Siraudin, A.; Wang, B.; Cevher,
V .; and Frossard, P. 2023. DiGress: Discrete Denoising Dif-
fusion for Graph Generation. arxiv:2209.14734.
Wan, F.; Kontogiorgos, H. D.; and Fuente, d. l. N. C. 2022.
Deep Generative Models for Peptide Design. Digital Dis-
covery , 1(3): 195–208.
Webster, R.; Rabin, J.; Simon, L.; and Jurie, F. 2019. Detect-
ing overfitting of deep generative networks via latent recov-
ery. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition, 11273–11282.
Wu, K. E.; Yang, K. K.; van den Berg, R.; Zou, J. Y.; Lu,
A. X.; and Amini, A. P. 2022. Protein Structure Generation
via Folding Diffusion. arxiv:2209.15611.
Wu, X.; Luu, A. T.; and Dong, X. 2022. Mitigating Data
Sparsity for Short Text Topic Modeling by Topic-Semantic
Contrastive Learning. In Proceedings of the 2022 Confer-
ence on Empirical Methods in Natural Language Process-
ing, 2748–2760.
Wu, Z.; Johnston, K. E.; Arnold, F. H.; and Yang, K. K.
2021. Protein sequence design with deep generative mod-
els.Current Opinion in Chemical Biology , 65: 18–27.
Xia, C.; Feng, S.-H.; Xia, Y .; Pan, X.; and Shen, H.-B. 2022.
Fast protein structure comparison through effective repre-
sentation learning with contrastive graph neural networks.
PLoS computational biology , 18(3): e1009986.
Yadav, N. S.; Kumar, P.; and Singh, I. 2022. Structural and
functional analysis of protein. In Bioinformatics , 189–206.
Elsevier.
Yang, L.; Yang, G.; Bing, Z.; Tian, Y .; Huang, L.; Niu, Y .;
and Yang, L. 2022. Accelerating the Discovery of Anti-
cancer Peptides Targeting Lung and Breast Cancers with the
Wasserstein Autoencoder Model and PSO Algorithm. Brief
Bioinform , 23(5): bbac320.
Yang, L.; Zhang, Z.; Song, Y .; Hong, S.; Xu, R.; Zhao, Y .;
Zhang, W.; Cui, B.; and Yang, M.-H. 2023. Diffusion Mod-
els: A Comprehensive Survey of Methods and Applications.
arxiv:2209.00796.
Yuan, X.; Lin, Z.; Kuen, J.; Zhang, J.; Wang, Y .; Maire, M.;
Kale, A.; and Faieta, B. 2021. Multimodal contrastive train-
ing for visual representation learning. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition , 6995–7004.
Zhang, H.; Saravanan, K. M.; Wei, Y .; Jiao, Y .; Yang,
Y .; Pan, Y .; Wu, X.; and Zhang, J. Z. H. 2023a. Deep
Learning-Based Bioactive Therapeutic Peptide Generation
and Screening. J Chem Inf Model , 63(3): 835–845.
Zhang, Z.; Xu, M.; Lozano, A.; Chenthamarakshan, V .; Das,
P.; and Tang, J. 2023b. Pre-Training Protein Encoder via
Siamese Sequence-Structure Diffusion Trajectory Predic-
tion. arxiv:2301.12068.
Zhang, Z.; Zhao, Y .; Chen, M.; and He, X. 2022. Label An-
chored Contrastive Learning for Language Understanding.
InProceedings of the 2022 Conference of the North Ameri-
can Chapter of the Association for Computational Linguis-
tics: Human Language Technologies , 1437–1449.
Zheng, M.; Wang, F.; You, S.; Qian, C.; Zhang, C.; Wang,
X.; and Xu, C. 2021. Weakly Supervised Contrastive Learn-
ing. In Proceedings of the IEEE/CVF International Confer-
ence on Computer Vision , 10042–10051.
Zhu, Y .; Wu, Y .; Olszewski, K.; Ren, J.; Tulyakov, S.;
and Yan, Y . 2022. Discrete contrastive diffusion for
cross-modal and conditional generation. arXiv preprint
arXiv:2206.07771 .
Zhu, Y .; Wu, Y .; Olszewski, K.; Ren, J.; Tulyakov, S.; and
Yan, Y . 2023. Discrete Contrastive Diffusion for Cross-
Modal Music and Image Generation. arxiv:2206.07771.
Inception-v4, Inception-ResNet and
the Impact of Residual Connections on Learning
Christian Szegedy
Google Inc.
1600 Amphitheatre Pkwy, Mountain View, CA
szegedy@google.com
Sergey Ioffe
sioffe@google.com
Vincent Vanhoucke
vanhoucke@google.com
Alex Alemi
alemi@google.com
Abstract
Very deep convolutional networks have been central to
the largest advances in image recognition performance in
recent years. One example is the Inception architecture that
has been shown to achieve very good performance at rel-
atively low computational cost. Recently, the introduction
of residual connections in conjunction with a more tradi-
tional architecture has yielded state-of-the-art performance
in the 2015 ILSVRC challenge; its performance was similar
to the latest generation Inception-v3 network. This raises
the question of whether there is any benefit in combining
the Inception architecture with residual connections. Here
we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks
significantly. There is also some evidence of residual Incep-
tion networks outperforming similarly expensive Inception
networks without residual connections by a thin margin. We
also present several new streamlined architectures for both
residual and non-residual Inception networks. These varia-
tions improve the single-frame recognition performance on
the ILSVRC 2012 classification task significantly. We fur-
ther demonstrate how proper activation scaling stabilizes
the training of very wide residual Inception networks. With
an ensemble of three residual and one Inception-v4, we
achieve 3.08% top-5 error on the test set of the ImageNet
classification (CLS) challenge.
1. Introduction
Since the 2012 ImageNet competition [11] winning en-
try by Krizhevsky et al [8], their network “AlexNet” has
been successfully applied to a larger variety of computer
vision tasks, for example to object-detection [4], segmen-
tation [10], human pose estimation [17], video classification [7], object tracking [18], and superresolution [3]. These
examples are but a few of all the applications to which deep
convolutional networks have been very successfully applied
ever since.
In this work we study the combination of the two most
recent ideas: Residual connections introduced by He et al.
in [5] and the latest revised version of the Inception archi-
tecture [15]. In [5], it is argued that residual connections are
of inherent importance for training very deep architectures.
Since Inception networks tend to be very deep, it is natu-
ral to replace the filter concatenation stage of the Inception
architecture with residual connections. This would allow
Inception to reap all the benefits of the residual approach
while retaining its computational efficiency.
Besides a straightforward integration, we have also stud-
ied whether Inception itself can be made more efficient by
making it deeper and wider. For that purpose, we designed
a new version named Inception-v4 which has a more uni-
form simplified architecture and more inception modules
than Inception-v3. Historically, Inception-v3 had inherited
a lot of the baggage of the earlier incarnations. The techni-
cal constraints chiefly came from the need for partitioning
the model for distributed training using DistBelief [2]. Now,
after migrating our training setup to TensorFlow [1] these
constraints have been lifted, which allowed us to simplify
the architecture significantly. The details of that simplified
architecture are described in Section 3.
In this report, we will compare the two pure Inception
variants, Inception-v3 and v4, with similarly expensive hy-
brid Inception-ResNet versions. Admittedly, those mod-
els were picked in a somewhat ad hoc manner with the
main constraint being that the parameters and computa-
tional complexity of the models should be somewhat similar
to the cost of the non-residual models. In fact we have tested
bigger and wider Inception-ResNet variants and they per-
formed very similarly on the ImageNet classification chal-
lenge [11] dataset.
The last experiment reported here is an evaluation of an
ensemble of all the best performing models presented here.
As it was apparent that both Inception-v4 and Inception-
ResNet-v2 performed similarly well, exceeding state-of-
the art single frame performance on the ImageNet valida-
tion dataset, we wanted to see how a combination of those
pushes the state of the art on this well studied dataset. Sur-
prisingly, we found that gains on the single-frame perfor-
mance do not translate into similarly large gains on ensem-
bled performance. Nonetheless, it still allows us to report
3.1% top-5 error on the validation set with four models en-
sembled setting a new state of the art, to our best knowl-
edge.
In the last section, we study some of the classification
failures and conclude that the ensemble still has not reached
the label noise of the annotations on this dataset and there
is still room for improvement for the predictions.
2. Related Work
Convolutional networks have become popular in large
scale image recognition tasks after Krizhevsky et al. [8].
Some of the next important milestones were Network-in-
network [9] by Lin et al., VGGNet [12] by Simonyan et al.
and GoogLeNet (Inception-v1) [14] by Szegedy et al.
Residual connections were introduced by He et al. in [5]
in which they give convincing theoretical and practical ev-
idence for the advantages of utilizing additive merging of
signals both for image recognition, and especially for object
detection. The authors argue that residual connections are
inherently necessary for training very deep convolutional
models. Our findings do not seem to support this view, at
least for image recognition. However it might require more
measurement points with deeper architectures to understand
the true extent of beneficial aspects offered by residual con-
nections. In the experimental section we demonstrate that
it is not very difficult to train competitive very deep net-
works without utilizing residual connections. However the
use of residual connections seems to improve the training
speed greatly, which is alone a great argument for their use.
The Inception deep convolutional architecture was intro-
duced in [14] and was called GoogLeNet or Inception-v1 in
our exposition. Later the Inception architecture was refined
in various ways, first by the introduction of batch normaliza-
tion [6] (Inception-v2) by Ioffe et al. Later the architecture
was improved by additional factorization ideas in the third
iteration [15] which will be referred to as Inception-v3 in
this report.
Figure 1. Residual connections as introduced in He et al. [5].

Figure 2. Optimized version of ResNet connections by [5] to shield computation.
3. Architectural Choices
3.1. Pure Inception blocks
Our older Inception models used to be trained in a par-
titioned manner, where each replica was partitioned into
multiple sub-networks in order to be able to fit the whole
model in memory. However, the Inception architecture is
highly tunable, meaning that there are a lot of possible
changes to the number of filters in the various layers that
do not affect the quality of the fully trained network. In
order to optimize the training speed, we used to tune the
layer sizes carefully in order to balance the computation be-
tween the various model sub-networks. In contrast, with the
introduction of TensorFlow our most recent models can be
trained without partitioning the replicas. This is enabled in
part by recent optimizations of memory used by backprop-
agation, achieved by carefully considering what tensors are
needed for gradient computation and structuring the computation to reduce the number of such tensors. Historically, we
have been relatively conservative about changing the archi-
tectural choices and restricted our experiments to varying
isolated network components while keeping the rest of the
network stable. Not simplifying earlier choices resulted in
networks that looked more complicated than they needed to
be. In our newer experiments, for Inception-v4 we decided
to shed this unnecessary baggage and made uniform choices
for the Inception blocks for each grid size. Please refer to
Figure 9 for the large scale structure of the Inception-v4 net-
work and Figures 3, 4, 5, 6, 7 and 8 for the detailed struc-
ture of its components. All the convolutions not marked
with “V” in the figures are same-padded meaning that their
output grid matches the size of their input. Convolutions
marked with “V” are valid padded, meaning that input patch
of each unit is fully contained in the previous layer and the
grid size of the output activation map is reduced accord-
ingly.
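As a quick check of how the grid sizes in the figures arise, the snippet below (an illustrative sketch, not from the paper) applies standard convolution arithmetic for the two padding modes.

```python
import math

def conv_output_size(n: int, kernel: int, stride: int, padding: str) -> int:
    """Spatial output size for a square input of size n.

    'same'  (unmarked in the figures): the output grid matches ceil(n / stride).
    'valid' ('V' in the figures): each patch lies fully inside the input, so the
    grid shrinks to floor((n - kernel) / stride) + 1.
    """
    if padding == "same":
        return math.ceil(n / stride)
    return (n - kernel) // stride + 1

# The stem of Inception-v4: 299x299 input, 3x3 convolution with stride 2, valid padding.
print(conv_output_size(299, 3, 2, "valid"))  # 149, matching the 149x149 grid in Figure 3
```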
3.2. Residual Inception Blocks
For the residual versions of the Inception networks, we
use cheaper Inception blocks than the original Inception.
Each Inception block is followed by a filter-expansion layer (1×1 convolution without activation), which is used for
scaling up the dimensionality of the filter bank before the
addition to match the depth of the input. This is needed to
compensate for the dimensionality reduction induced by the
Inception block.
We tried several versions of the residual version of In-
ception. Only two of them are detailed here. The first
one, “Inception-ResNet-v1”, roughly matches the computational cost
of Inception-v3, while “Inception-ResNet-v2” matches the
raw cost of the newly introduced Inception-v4 network. See
Figure 15 for the large scale structure of both variants.
(However, the step time of Inception-v4 proved to be signif-
icantly slower in practice, probably due to the larger number
of layers.)
Another small technical difference between our resid-
ual and non-residual Inception variants is that in the case
of Inception-ResNet, we used batch-normalization only on
top of the traditional layers, but not on top of the summa-
tions. It is reasonable to expect that a thorough use of batch-
normalization should be advantageous, but we wanted to
keep each model replica trainable on a single GPU. It turned
out that the memory footprint of layers with large activa-
tion size was consuming disproportionate amount of GPU-
memory. By omitting the batch-normalization on top of
those layers, we were able to increase the overall number
of Inception blocks substantially. We hope that with bet-
ter utilization of computing resources, making this trade-off
will become unnecessary.
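To make the pattern concrete, here is a minimal PyTorch-style sketch (an assumption; the authors trained in TensorFlow, and the branch widths here are placeholders) of the residual Inception scheme just described: cheap parallel branches, a 1×1 filter-expansion convolution without activation to restore the input depth, an additive shortcut, and batch normalization only on the traditional convolutional layers.

```python
import torch
import torch.nn as nn

class InceptionResNetBlock(nn.Module):
    """Generic residual Inception block: branches -> concat -> 1x1 linear
    filter expansion -> add to the input -> ReLU."""

    def __init__(self, in_ch: int):
        super().__init__()
        def conv(cin, cout, k):
            # Batch norm on the "traditional" layers only, not after the summation.
            return nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.branch1 = conv(in_ch, 32, 1)
        self.branch2 = nn.Sequential(conv(in_ch, 32, 1), conv(32, 32, 3))
        self.branch3 = nn.Sequential(conv(in_ch, 32, 1), conv(32, 32, 3), conv(32, 32, 3))
        # Filter expansion: 1x1 convolution without activation, scaling the
        # concatenated branches back up to the input depth before the addition.
        self.expand = nn.Conv2d(32 * 3, in_ch, kernel_size=1)

    def forward(self, x):
        branches = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return torch.relu(x + self.expand(branches))

x = torch.randn(1, 256, 35, 35)
print(InceptionResNetBlock(256)(x).shape)  # torch.Size([1, 256, 35, 35])
```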
Figure 3. The schema for the stem of the pure Inception-v4 and Inception-ResNet-v2 networks. This is the input part of those networks (grid sizes along the stem: 299x299x3, 149x149x32, 147x147x32, 147x147x64, 73x73x160, 71x71x192, 35x35x384). Cf. Figures 9 and 15.
Figure 4. The schema for the 35x35 grid modules of the pure Inception-v4 network. This is the Inception-A block of Figure 9.
Figure 5. The schema for the 17x17 grid modules of the pure Inception-v4 network. This is the Inception-B block of Figure 9.
Figure 6. The schema for the 8x8 grid modules of the pure Inception-v4 network. This is the Inception-C block of Figure 9.
Figure 7. The schema for the 35x35 to 17x17 reduction module. Different variants of this block (with various numbers of filters) are used in Figures 9 and 15, in each of the new Inception(-v4, -ResNet-v1, -ResNet-v2) variants presented in this paper. The k, l, m, n numbers represent filter bank sizes which can be looked up in Table 1.
Figure 8. The schema for the 17x17 to 8x8 grid-reduction module. This is the reduction module used by the pure Inception-v4 network in Figure 9.
Figure 9. The overall schema of the Inception-v4 network (Input 299x299x3, Stem, 4 x Inception-A, Reduction-A, 7 x Inception-B, Reduction-B, 3 x Inception-C, Average Pooling, Dropout (keep 0.8), Softmax; final feature size 1536, 1000 outputs). For the detailed structure of the various components, please refer to Figures 3, 4, 5, 6, 7 and 8.
Figure 10. The schema for the 35x35 grid (Inception-ResNet-A) module of the Inception-ResNet-v1 network.
Figure 11. The schema for the 17x17 grid (Inception-ResNet-B) module of the Inception-ResNet-v1 network.
Figure 12. The “Reduction-B” 17x17 to 8x8 grid-reduction module used by the smaller Inception-ResNet-v1 network in Figure 15.
Figure 13. The schema for the 8x8 grid (Inception-ResNet-C) module of the Inception-ResNet-v1 network.
Figure 14. The stem of the Inception-ResNet-v1 network (grid sizes along the stem: 299x299x3, 149x149x32, 147x147x32, 147x147x64, 73x73x64, 73x73x80, 71x71x192, 35x35x256).
Figure 15. Schema for the Inception-ResNet-v1 and Inception-ResNet-v2 networks (Input 299x299x3, Stem, 5 x Inception-ResNet-A, Reduction-A, 10 x Inception-ResNet-B, Reduction-B, 5 x Inception-ResNet-C, Average Pooling, Dropout (keep 0.8), Softmax). This schema applies to both networks but the underlying components differ. Inception-ResNet-v1 uses the blocks as described in Figures 14, 10, 7, 11, 12 and 13. Inception-ResNet-v2 uses the blocks as described in Figures 3, 16, 7, 17, 18 and 19. The output sizes in the diagram refer to the activation tensor shapes of Inception-ResNet-v1.
Figure 16. The schema for the 35x35 grid (Inception-ResNet-A) module of the Inception-ResNet-v2 network.
Figure 17. The schema for the 17x17 grid (Inception-ResNet-B) module of the Inception-ResNet-v2 network.
Figure 18. The schema for the 17x17 to 8x8 grid-reduction module (the Reduction-B module used by the wider Inception-ResNet-v2 network in Figure 15).
Figure 19. The schema for the 8x8 grid (Inception-ResNet-C) module of the Inception-ResNet-v2 network.
Network               k    l    m    n
Inception-v4          192  224  256  384
Inception-ResNet-v1   192  192  256  384
Inception-ResNet-v2   256  256  384  384
Table 1. The number of filters of the Reduction-A module for the three Inception variants presented in this paper. The four numbers in the columns parametrize the four convolutions of Figure 7.
Figure 20. The general schema for scaling combined Inception-ResNet modules. We expect that the same idea is useful in the general ResNet case, where instead of the Inception block an arbitrary subnetwork is used. The scaling block just scales the last linear activations by a suitable constant, typically around 0.1.
3.3. Scaling of the Residuals
We also found that if the number of filters exceeded 1000, the residual variants started to exhibit instabilities and the network simply “died” early in the training, meaning that the last layer before the average pooling started to produce only zeros after a few tens of thousands of iterations. This could not be prevented either by lowering the learning rate or by adding an extra batch-normalization to this layer.
We found that scaling down the residuals before adding them to the previous layer activation seemed to stabilize the training. In general, we picked scaling factors between 0.1 and 0.3 to scale the residuals before they are added to the accumulated layer activations (cf. Figure 20).
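The scheme of Figure 20 amounts to a one-line change in the residual addition; the following PyTorch sketch is our own illustration (class and argument names are hypothetical), scaling the output of an arbitrary shape-preserving subnetwork by a constant in the reported 0.1-0.3 range before it is added to the shortcut:

import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    # Wraps an arbitrary shape-preserving subnetwork (e.g. an Inception block
    # followed by its filter-expansion layer) and scales its output before the
    # residual addition, as in Figure 20.
    def __init__(self, subnetwork: nn.Module, scale: float = 0.1):
        super().__init__()
        self.subnetwork = subnetwork
        self.scale = scale  # factors between 0.1 and 0.3 were used
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(x + self.scale * self.subnetwork(x))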
A similar instability was observed by He et al. in [5] in
the case of very deep residual networks and they suggested a
two-phase training where the first “warm-up” phase is done
with a very low learning rate, followed by a second phase with a high learning rate. We found that if the number of
filters is very high, then even a very low (0.00001) learning
rate is not sufficient to cope with the instabilities and the
training with high learning rate had a chance to destroy its
effects. We found it much more reliable to just scale the
residuals.
Even where the scaling was not strictly necessary, it
never seemed to harm the final accuracy, but it helped to
stabilize the training.
4. Training Methodology
We have trained our networks with stochastic gradient descent, utilizing the TensorFlow [1] distributed machine learning system with 20 replicas, each running on an NVidia Kepler GPU. Our earlier experiments used momentum [13] with a decay of 0.9, while our best models were achieved using RMSProp [16] with a decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. Model evaluations are performed using a running average of the parameters computed over time.
Figure 21. Top-1 error evolution during training of pure Inception-v3 vs a residual network of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual model was training much faster, but reached slightly worse final accuracy than the traditional Inception-v3.
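For concreteness, the stated hyperparameters translate roughly into the following optimizer and schedule; this is our own approximation in PyTorch (the paper's implementation was distributed TensorFlow with 20 replicas), and the placeholder model is hypothetical:

import torch

model = torch.nn.Linear(10, 10)  # placeholder for an Inception-style network

# RMSProp with decay 0.9 and epsilon 1.0, learning rate 0.045,
# decayed by a factor of 0.94 every two epochs.
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.045, alpha=0.9, eps=1.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.94)

for epoch in range(200):
    # ... one epoch of training (optimizer.step() per mini-batch) would go here ...
    scheduler.step()  # stepped once per epoch; the decay takes effect every 2 epochs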
5. Experimental Results
First we observe the top-1 and top-5 validation-error evolution of the four variants during training. After the experiment was conducted, we found that our continuous evaluation had been performed on a subset of the validation set which omitted about 1700 blacklisted entities due to poor bounding boxes. It turned out that the omission should have been performed only for the CLSLOC benchmark, but it yields somewhat incomparable (more optimistic) numbers when compared to other reports, including some earlier reports by our team. The difference is about 0.3% for the top-1 error and about 0.15% for the top-5 error. However, since the differences are consistent, we think the comparison between the curves is a fair one.
On the other hand, we have rerun our multi-crop and ensemble results on the complete validation set consisting of 50000 images. The final ensemble result was also evaluated on the test set and sent to the ILSVRC test server for validation, to verify that our tuning did not result in over-fitting. We would like to stress that this final validation was done only once and we have submitted our results only twice in the last year: once for the BN-Inception paper and later during the ILSVRC-2015 CLSLOC competition, so we believe that the test set numbers constitute a true estimate of the generalization capabilities of our model.
Finally, we present some comparisons between various versions of Inception and Inception-ResNet. The models Inception-v3 and Inception-v4 are deep convolutional networks not utilizing residual connections, while Inception-ResNet-v1 and Inception-ResNet-v2 are Inception-style networks that utilize residual connections instead of filter concatenation. Table 2 shows the single-model, single-crop top-1 and top-5 error of the various architectures on the validation set.
Figure 22. Top-5 error evolution during training of pure Inception-v3 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained much faster and reached slightly better final recall on the validation set.
Figure 23. Top-1 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version was training much faster and reached slightly better final accuracy than the traditional Inception-v4.
Network Top-1 Error Top-5 Error
BN-Inception [6] 25.2% 7.8%
Inception-v3 [15] 21.2% 5.6%
Inception-ResNet-v1 21.3% 5.5%
Inception-v4 20.0% 5.0%
Inception-ResNet-v2 19.9% 4.9%
Table 2. Single crop - single model experimental results. Reported on the non-blacklisted subset of the validation set of ILSVRC 2012.
Figure 24. Top-5 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained faster and reached slightly better final recall on the validation set.
Figure 25. Top-5 error evolution of all four models (single model, single crop), showing the improvement due to larger model size. Although the residual version converges faster, the final accuracy seems to mainly depend on the model size.
Figure 26. Top-1 error evolution of all four models (single model, single crop). This paints a similar picture as the top-5 evaluation.
Table 3 shows the performance of the various models with a small number of crops: 10 crops for ResNet, as reported in [5]; for the Inception variants, we have used the 12-crop evaluation as described in [14].
Network Crops Top-1 Error Top-5 Error
ResNet-151 [5] 10 21.4% 5.7%
Inception-v3 [15] 12 19.8% 4.6%
Inception-ResNet-v1 12 19.8% 4.6%
Inception-v4 12 18.7% 4.2%
Inception-ResNet-v2 12 18.7% 4.1%
Table 3. 10/12 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
Network Crops Top-1 Error Top-5 Error
ResNet-151 [5] dense 19.4% 4.5%
Inception-v3 [15] 144 18.9% 4.3%
Inception-ResNet-v1 144 18.8% 4.3%
Inception-v4 144 17.7% 3.8%
Inception-ResNet-v2 144 17.8% 3.7%
Table 4. 144 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
Network Models Top-1 Error Top-5 Error
ResNet-151 [5] 6 – 3.6%
Inception-v3 [15] 4 17.3% 3.6%
Inception-v4 + 3x Inception-ResNet-v2 4 16.5% 3.1%
Table 5. Ensemble results with 144 crops/dense evaluation. Reported on all 50000 images of the validation set of ILSVRC 2012. For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 and three Inception-ResNet-v2 models and was evaluated both on the validation and on the test set. The test-set performance was 3.08% top-5 error, verifying that we don't over-fit on the validation set.
Table 4 shows the single-model performance of the various models under multi-crop evaluation. For the residual network, the dense evaluation result is reported from [5]. For the Inception networks, the 144-crop strategy was used as described in [14].
Table 5 compares ensemble results. For the pure residual network, the 6-model dense evaluation result is reported from [5]. For the Inception networks, 4 models were ensembled using the 144-crop strategy as described in [14].
6. Conclusions
We have presented three new network architectures in detail:
• Inception-ResNet-v1: a hybrid Inception version that has a similar computational cost to Inception-v3 from [15].
• Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance.
• Inception-v4: a pure Inception variant without residual connections with roughly the same recognition performance as Inception-ResNet-v2.
We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Our latest models (with and without residual connections) also outperform all our previous networks, simply by virtue of the increased model size.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao,
A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale dis-
tributed deep networks. In Advances in Neural Information
Processing Systems , pages 1223–1231, 2012.
[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep
convolutional network for image super-resolution. In Com-
puter Vision–ECCV 2014 , pages 184–199. Springer, 2014.
[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea-
ture hierarchies for accurate object detection and semantic
segmentation. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR) , 2014.
[5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn-
ing for image recognition. arXiv preprint arXiv:1512.03385 ,
2015.
[6] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
Proceedings of The 32nd International Conference on Ma-
chine Learning , pages 448–456, 2015.
[7] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar,
and L. Fei-Fei. Large-scale video classification with con-
volutional neural networks. In Computer Vision and Pat-
tern Recognition (CVPR), 2014 IEEE Conference on , pages
1725–1732. IEEE, 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet
classification with deep convolutional neural networks. In
Advances in neural information processing systems , pages
1097–1105, 2012.
[9] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv
preprint arXiv:1312.4400 , 2013.
[10] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional
networks for semantic segmentation. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recogni-
tion, pages 3431–3440, 2015.
[11] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh,
S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014.
[12] K. Simonyan and A. Zisserman. Very deep convolutional
networks for large-scale image recognition. arXiv preprint
arXiv:1409.1556 , 2014.
[13] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the
importance of initialization and momentum in deep learning.
In Proceedings of the 30th International Conference on Ma-
chine Learning (ICML-13) , volume 28, pages 1139–1147.
JMLR Workshop and Conference Proceedings, May 2013.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition ,
pages 1–9, 2015.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna.
Rethinking the inception architecture for computer vision.
arXiv preprint arXiv:1512.00567 , 2015.
[16] T. Tieleman and G. Hinton. Divide the gradient by a run-
ning average of its recent magnitude. COURSERA: Neural
Networks for Machine Learning, 4, 2012. Accessed: 2015-
11-05.
[17] A. Toshev and C. Szegedy. Deeppose: Human pose estima-
tion via deep neural networks. In Computer Vision and Pat-
tern Recognition (CVPR), 2014 IEEE Conference on , pages
1653–1660. IEEE, 2014.
[18] N. Wang and D.-Y. Yeung. Learning a deep compact image
representation for visual tracking. In Advances in Neural
Information Processing Systems , pages 809–817, 2013. |
Neural Embeddings for kNN Search in Biological Sequence
Zhihao Chang1, Linzhu Yu2, Yanchao Xu2, Wentao Hu3
1The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China
2College of Computer Science and Technology, Zhejiang University, Hangzhou, China
3Zhejiang Police College, Hangzhou, China
{changzhihao, linzhu, xuyanchao, wthu}@zju.edu.cn
Abstract
Biological sequence nearest neighbor search plays a fun-
damental role in bioinformatics. To alleviate the pain of
quadratic complexity for conventional distance computa-
tion, neural distance embeddings, which project sequences
into geometric space, have been recognized as a promising
paradigm. To maintain the distance order between sequences,
these models all deploy triplet loss and use intuitive methods
to select a subset of triplets for training from a vast selection
space. However, we observed that such training often enables
models to distinguish only a fraction of distance orders, leav-
ing others unrecognized. Moreover, naively selecting more
triplets for training under the state-of-the-art network not only
adds costs but also hampers model performance.
In this paper, we introduce Bio-kNN: a kNN search frame-
work for biological sequences. It includes a systematic triplet
selection method and a multi-head network, enhancing the
discernment of all distance orders without increasing training
expenses. Initially, we propose a clustering-based approach
to partition all triplets into several clusters with similar prop-
erties, and then select triplets from these clusters using an
innovative strategy. Meanwhile, we noticed that simultaneously training different types of triplets in the same network cannot achieve the expected performance; thus, we propose a multi-head network to tackle this. Our network employs a convolutional neural network (CNN) to extract local features shared by all clusters, and then learns a multi-layer perceptron (MLP) head for each cluster separately. Besides, we
treat CNN as a special head, thereby integrating crucial lo-
cal features which are neglected in previous models into our
model for similarity recognition. Extensive experiments show
that our Bio-kNN significantly outperforms the state-of-the-
art methods on two large-scale datasets without increasing the
training cost.
Introduction
Biological sequence nearest neighbor search plays a fun-
damental role in bioinformatics research and serves as the
cornerstone for numerous tasks, including gene predic-
tion (Chothia and Lesk 1986), homology analysis (Sander
and Schneider 1991), sequence clustering (Steinegger and
Söding 2018; Li and Godzik 2021), etc. Traditional methods
for measuring global or local similarity between sequences
rely on alignment based on dynamic programming. In this
paper, we focus on the global similarity between sequences,
evaluated by the widely used Needleman-Wunsch (NW) al-
gorithm (Needleman and Wunsch 1970). While the NW al-
gorithm is proficient in calculating sequence similarity with
precision, its inherent quadratic complexity poses signifi-
cant challenges for rapid analysis, particularly when dealing
with large-scale datasets comprising sequences that extend
to hundreds or even thousands of amino acids or nucleotides.
In recent years, embedding-based approaches have
emerged as a promising paradigm for expediting sequence
similarity analysis. These approaches involve projecting se-
quences into a geometric embedding space through an em-
bedding function, such that the distance between sequences
can be approximated by the distance in the embedding
space, which offers a computationally efficient alternative.
These approaches can be broadly divided into two categories
based on the core idea of the embedding function: rule-based
and neural network-based. Rule-based approaches (Sims
et al. 2009; Gao and Qi 2007; Ulitsky et al. 2006; Haubold
et al. 2009; Leimeister and Morgenstern 2014) often rely
on some predefined encoding rules. Several studies (Corso
et al. 2021; Chen et al. 2022) have indicated that, in multiple
tasks, these approaches exhibit inferior performance com-
pared to neural network-based ones. Given this context, we
will not delve into rule-based approaches, and instead con-
centrate on exploring neural network-based approaches.
Existing research on neural network-based meth-
ods (Zheng et al. 2019; Chen et al. 2022; Zhang, Yuan, and
Indyk 2019; Dai et al. 2020; Corso et al. 2021) primarily
focused on various components such as encoding models
and loss functions. These components are tailored to the
task for which the learned embeddings are used. Notably,
certain approaches (Zhang, Yuan, and Indyk 2019; Dai et al.
2020) focus on the learning objective aimed at preserving
distance orders within the embedding space to facilitate
kNN searches. To achieve this goal, these approaches
employ triplet loss (Weinberger and Saul 2009; Hermans,
Beyer, and Leibe 2017) and use intuitive methods to select
triplets in the form (Sacr, Spos, Sneg) for training, in which Sacr is the anchor sequence and Spos is the positive sequence that has a smaller distance to Sacr than the negative sequence Sneg. However, we found that the models trained by these methods exhibit proficiency in distance order recognition
for only a limited subset of triplets, rather than the entire set. As illustrated in Figure 1, while certain order relations may be accurately identified after encoding, relations overlooked during training can substantially compromise the results. Such complications stem from the fact that each sequence lacks a definitive category label, rendering existing techniques ineffective in this context. It might be hypothesized that increasing the number of triplets for training could ameliorate this issue. However, our assessments within a state-of-the-art network indicate that the problem is not alleviated, while additional training expenses are incurred.
Figure 1: The triplet selection methods used by GRU (Zhang, Yuan, and Indyk 2019) and CNNED (Dai et al. 2020). For GRU, the Top-N closest to the anchor is positive and the others are negative; for CNNED, two sequences are randomly selected from the Top-K closest to the anchor, the closer is positive, and the farther is negative. In this example we set N equal to 2 and K equal to 4. (The white numbers in the points indicate the order of distance from the anchor.)
In this paper, we introduce Bio-kNN, a biological se-
quence kNN search framework. Bio-kNN aims to notably
improve the recognition accuracy of distance order dis-
tributed throughout the whole space without augmenting
training expenses. The core idea of Bio-kNN is to par-
tition all triplets into several clusters based on certain
properties and learn a feature extraction network for each
cluster. Specifically, Bio-kNN features two main modules:
(1) Triplet selection method. A notable limitation of previous models is that only a subset of the triplets is considered during training. In this module, we consider all possible combinations of triplets. We partition the selection space into small cells and merge cells with similar distance distributions into several clusters. We then employ an innovative strategy to select training triplets from these clusters without external samples. (2) Multi-head network. We noticed that merely adding more triplets in the SOTA network does not improve the performance, so we propose a multi-head network to address it. Our network uses a CNN as the backbone to extract local features, and learns a multi-layer perceptron head for each cluster to extract global features. Furthermore, we integrate previously overlooked local features derived from the CNN, which are crucial in discernment.
To summarize, we made four contributions in this paper.
1. We consider the entire selection space instead of subsets,
and propose a clustering-based triplet selection method.
2. We notice that the performance of the SOTA network de-
grades when simultaneously training different types of
triplets. A multi-head network is designed to alleviate it.
3. We treat CNN as a special head and integrate crucial local
features into our model for sequence similarity.
4. We conduct extensive experiments on two large-scale
datasets, and the results show that our method signifi-
cantly outperforms the state-of-the-art methods.
Related Work
Rule-Based Approaches. Numerous rule-based approaches
have been proposed over the past few decades, which can
be broadly classified into two categories. The first cate-
gory typically utilizes word frequency statistics with a pre-
defined length (Kariin and Burge 1995) or the information
content of word frequency distribution (Sims et al. 2009;
Gao and Qi 2007) as features to characterize sequence sim-
ilarity. On the other hand, the second category of meth-
ods is based on the concept of sub-strings (Ulitsky et al.
2006; Haubold et al. 2009; Leimeister and Morgenstern
2014). However, it should be noted that all these approaches
are data-independent, and their distance measures rely on
heuristic rules. Several studies have shown that these ap-
proaches exhibit weaker performance compared to neural
network-based approaches across various tasks.
Neural Network-Based Approaches. Notable efforts in
neural networks have been made to approximate distances
for biological sequences in recent years. SENSE (Zheng
et al. 2019) is the first attempt to employ neural networks
for comparison-free sequence analysis by utilizing a con-
volutional neural network. However, SENSE is restricted to
handling sequences of the same length. To address it, As-
Mac (Chen et al. 2022) was proposed, which employs an ap-
proximate string matching algorithm to extract relevant fea-
tures through a neural network. Regrettably, the performance
of this approach degrades when dealing with protein se-
quences, primarily due to the massive search space involved.
A research domain closely aligned with our work fo-
cuses on edit distance embedding. The distinction lies in
the NW algorithm’s requirement to normalize the edit dis-
tance by a dynamically varying length, thereby amplifying
the complexity of discerning similarities. CGK (Ostrovsky
and Rabani 2007) embeds the edit distance into the ham-
ming space with a distortion of 2^O(√(log l · log log l)); however,
this algorithm is excessively intricate for practical applica-
tion. Zhang et al. (Zhang, Yuan, and Indyk 2019) propose a
two-layer GRU structure to encode sequences, dividing the training process into three stages and utilizing three different loss functions. Nonetheless, the embedding dimension generated by this method is relatively high, resulting in substantial memory consumption.
Figure 2: Motivating example. For the convenience of observation, the bottom subfigures are the results after comparing with the model trained by randomly selecting triplets, i.e., for each Sacr, two sequences are randomly selected from the training set; the closer to Sacr is Spos, and the farther is Sneg.
CNNED (Dai et al. 2020) dis-
covers that an untrained random CNN performs comparably
to GRU models, leading to the belief that the CNN is more
suitable for the edit distance embedding than RNN-based
models. NeuroSEED (Corso et al. 2021) explores the poten-
tial of employing global and local transformers to encode
biological sequences, and experimental results also affirm
that convolutional models surpass feedforward and recurrent
models for biological sequence edit distance tasks. Further-
more, NeuroSEED proposes that the hyperbolic space can
better capture the data dependencies among biological se-
quences from the perspective of embedded geometry.
Motivating Example
In this section, we use an example to reveal the limitations of
existing methods. We first model the entire selection space
as an upper triangular area. Then we visualize the distribu-
tion of training triplets and the performance of the trained
model, thus we can easily observe the relationship between
them. Example details are as follows.
Example Setting
We first randomly select 3000 sequences from UniProtKB (https://www.uniprot.org/) and use 1500 of them as the training set, while the remaining 1500 form the test set. Then, we employ the state-of-the-
art pipeline proposed by CNNED (Dai et al. 2020) as the
common training framework, and replace the triplet selec-
tion method with five other methods respectively during
training, including two methods adopted by previous mod-
els: the methods used by CNNED (Dai et al. 2020) and
GRU (Zhang, Yuan, and Indyk 2019), and three methods de-
signed for comparison: Method-3, Method-4, and Method-5.
In Figure 2, we plot the distribution of triplets selected
by these five methods on the training set (top subfigures)
and the distance order recognition results on the test set
(bottom subfigures) respectively. The horizontal and verti-
cal coordinates (i, j) of each subfigure in Figure 2 are all determined by the triplet (Sacr, Spos, Sneg). For each Sacr,
we first sort the other sequences according to the distance between them and Sacr from small to large to form a list, and the indices i and j of Spos and Sneg in the list are used as the abscissa and ordinate, respectively. The difference between the top and the bottom subfigures is the triplets used for visualization: (1) We plot the top subfigures according to the triplets obtained in the training set by the five triplet selection methods. The depth of the color indicates the frequency with which the corresponding triplet is selected. (2) For the bottom subfigures, the triplets are all triplet combinations in the test set, and these subfigures are used to visualize the results of distance order recognition in the test set. We iterated over all triplet combinations in the test set to check whether the distance between Sacr and Spos is smaller than the distance between Sacr and Sneg after encoding by model f, i.e., dist_e(f(Sacr), f(Spos)) < dist_e(f(Sacr), f(Sneg)); the more frequently this holds, the more vivid the color.
Phenomenon
From Figure 2, we can observe the following three phenomena, including one expected and two that are inconsistent with expectation but interesting:
1. Expected. Figures 2(a)-(d) illustrate that sequence distance order recognition in the test set is highly correlated with the training triplets. This phenomenon is expected, as the more triplets the model learns for a region in the training set, the better it helps distinguish the order of that region in the test set. However, we can clearly observe that the model trained by these methods can only recognize the order of a small part of the whole area. This observation shows that the model is very limited in identifying crucial regions that lie beyond its training region (e.g., asking the model in Figure 2(a) to recognize the order of the region determined by Method-3). Such a limitation greatly affects the effectiveness of the model.
2. Unexpected. Inspired by phenomenon 1, an intuitive idea is to select more training regions. We thus trained Method-5, which simultaneously trains the regions selected by Method-1, Method-3, and Method-4. However, the recognition results are not consistent with our expectation: as shown in Figure 2(e), although certain regions have been trained, the corresponding regions in the test set do not show better distance order recognition.
3. Unexpected. These figures also illustrate that the model has a radiation effect on regions outside the training region, i.e., even if some regions are not selected, the model is still better able to recognize the order of those regions. Furthermore, the radiation region produced by training regions in different positions varies greatly.
Method
To address the issues arising in phenomenon 2, we propose
Bio-kNN, which includes a triplet selection method and a
multi-head network. Its framework is shown in Figure 3.
Triplet Selection Method
Partition Selection Space. As shown in Figure 3(b), we par-
tition the entire selection space formed by the training set
into small cells. Specifically, we use the same setting as that
in the motivating example to model the entire selection space
as an upper triangular area, and the length of two legs of the
triangle is the number of sequences in the training set. In this
setting, for each Sacr, each point in the triangle represents a
triplet, where the abscissa represents the index of the Spos,
and the ordinate represents the index of the Sneg. Then,
we divide the horizontal and vertical axes into B groups respectively based on an equal interval δ, where the horizontal axis is divided into [[x0, x1), [x1, x2), ..., [xB-1, xB)] and the vertical axis into [[y0, y1), [y1, y2), ..., [yB-1, yB)]. Thus the upper triangular area is divided into Σ_{i=1}^{B} i small cells, where most of the cells are grids and a few are triangles. Then, the coordinates of each cell can be described as (Xi, Yj), where Xi means [xi, xi+1) and Yj means [yj, yj+1).
Distribution Statistics in Cell. For each cell after parti-
tioning, we use the interval of the coordinates to count the
horizontal and vertical distributions. This step is inspired by
phenomenon 3 in the motivating example, which shows that
some properties between adjacent regions may be similar.
In this step, we try to use the intuitive distance distribu-
tion as this property. It is worth noting that the possibility
of other properties is not ruled out, which can be studied in
the future. Next, we use an example to illustrate the details
of our approach. Suppose there is a cell with coordinates (X_{500,600}, Y_{700,800}); we use each sequence in the training set as Sacr in turn. For each Sacr, we sort the other sequences in the training set according to the NW distance between them and Sacr from small to large to form a list l. We then count the horizontal distance distribution between all sequences in the list l[500:600] and Sacr for X_{500,600}, while counting l[700:800] for Y_{700,800}. In this way, the coordinates of each cell can be further described as (Xi, Yj), where Xi means count([xi, xi+1)) and Yj means count([yj, yj+1)). Subsequent cell coordinates will use this definition by default.
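As a small illustration of this counting step (our own sketch, assuming a precomputed matrix of NW distances; the function name is hypothetical), the two sample sets below describe a cell's horizontal and vertical distance distributions and can be compared with the EMD introduced in the next subsection:

import numpy as np

def cell_distance_samples(dist_matrix, x_lo, x_hi, y_lo, y_hi):
    # For the cell (X_{x_lo,x_hi}, Y_{y_lo,y_hi}): take each training sequence
    # as the anchor in turn, sort the other sequences by NW distance, and
    # collect the distances whose rank falls inside the two index intervals.
    # dist_matrix[a] holds the NW distances from anchor a to the other sequences.
    x_samples, y_samples = [], []
    for row in dist_matrix:
        ordered = np.sort(row)
        x_samples.extend(ordered[x_lo:x_hi])
        y_samples.extend(ordered[y_lo:y_hi])
    return np.array(x_samples), np.array(y_samples)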
Distance Measurement between Cells. How to measure
the distance between cells with distributions as coordinates
becomes a new problem. Currently, there are many functions
to measure the distance between two distributions, such as
Kullback–Leibler divergence (Kullback and Leibler 1951),
Jensen-Shannon divergence (Fuglede and Topsøe 2004),
Earth mover's distance (EMD) (Rubner, Tomasi, and Guibas 2000), etc.
Figure 3: The Framework of Bio-kNN. (a) Multi-Head Network (Training); (b) Triplet Selection Method.
However, we noticed that when two distributions
do not overlap, the KL divergence is meaningless, and the
JS divergence is a constant, so neither of these functions is
suitable for measuring the distance between cells in our ap-
plication scenario. Considering that EMD as a metric sat-
isfies non-negativity, symmetry, and triangle inequality, we
define the distance between two cells on the basis of EMD.
Specifically, given any two cells p and q whose coordinates are (Xpi, Ypj) and (Xqi, Yqj) respectively, we define the distance dcell(p, q) between p and q as:
dcell(p, q) = EMD(Xpi, Xqi) + EMD(Ypj, Yqj)    (1)
We prove that dcell(p, q) between cells is still a metric.
Theorem 1 The distance dcell computed by Equation 1 is a metric. Given any three cells p, q, and r, we have:
(1) Non-negativity. If p ≠ q, then dcell(p, q) > 0.
(2) Symmetry. dcell(p, q) = dcell(q, p).
(3) Triangle inequality. dcell(p, r) ≤ dcell(p, q) + dcell(q, r).
Proof 1 According to the non-negativity and symmetry of the EMD, it can be easily obtained that dcell also satisfies non-negativity and symmetry, so we only prove the triangle inequality of dcell:
dcell(p, r) = EMD(Xpi, Xri) + EMD(Ypj, Yrj)
            ≤ (EMD(Xpi, Xqi) + EMD(Xqi, Xri)) + (EMD(Ypj, Yqj) + EMD(Yqj, Yrj))
            = (EMD(Xpi, Xqi) + EMD(Ypj, Yqj)) + (EMD(Xqi, Xri) + EMD(Yqj, Yrj))
            = dcell(p, q) + dcell(q, r)
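Since the X and Y coordinates of a cell are one-dimensional distance distributions, Equation 1 can be computed directly with an off-the-shelf 1-D earth mover's distance; the sketch below uses scipy.stats.wasserstein_distance and is our own illustration (the toy samples are made up):

import numpy as np
from scipy.stats import wasserstein_distance  # 1-D earth mover's distance

def d_cell(x_p, y_p, x_q, y_q):
    # Equation 1: sum of the EMDs between the horizontal (X) and vertical (Y)
    # distance distributions of the two cells, each given as 1-D samples.
    return wasserstein_distance(x_p, x_q) + wasserstein_distance(y_p, y_q)

p = (np.random.rand(100), np.random.rand(100))  # toy (X, Y) samples for cell p
q = (np.random.rand(100), np.random.rand(100))  # toy (X, Y) samples for cell q
print(d_cell(p[0], p[1], q[0], q[1]))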
Cell Clustering. Our last step is to merge those cells that
have a similar distance distribution. We achieve this using
unsupervised clustering, which is naturally suited to distin-
guishing similar items such that distributions vary widely
across clusters, while the distribution of cells within a single
cluster is very close. In this paper, we do not propose a new
clustering algorithm, but directly deploy existing cluster-
ing algorithms. In the following, we evaluate the performance of commonly used clustering algorithms such as k-means (Forgy 1965), agglomerative clustering (Murtagh and Contreras 2012), and spectral clustering (von Luxburg 2007). Subsequent experiments will show more detailed results.
Selection Strategy. Suppose there are m training sequences and n clusters are obtained based on the above method. An intuitive selection strategy is that, for each Sacr, we randomly select one point from each of the n clusters at each epoch; the abscissa of these n points gives the index of Spos and the ordinate gives the index of Sneg. However, this strategy needs to select m*n triplets for training at each epoch. Clearly, the training cost of this strategy increases linearly with m and n, and it burdens the expansion of the dataset when n is large.
We employ a novel selection strategy that achieves good performance without adding more cost. Specifically, before each epoch of training, we first randomly shuffle all anchor sequences. Then, for each batch, we divide all anchor sequences in the current batch evenly into n lists, and assign the n clusters to the n lists as candidate clusters respectively. The strategy is then that, for each Sacr, we only randomly select a point from its corresponding candidate cluster instead of all clusters, and the number of training triplets for each epoch is also changed from m*n to n.
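The strategy can be sketched in a few lines (our own illustration; the batch size, the round-robin assignment of clusters to the lists, and the cells_by_cluster structure holding candidate (S_pos, S_neg) index pairs are assumptions made for the sketch):

import random

def select_triplets(anchor_indices, clusters, cells_by_cluster, batch_size=64):
    # Shuffle all anchors, split each batch evenly into n lists, and draw one
    # triplet per anchor from its assigned candidate cluster only.
    n = len(clusters)
    random.shuffle(anchor_indices)
    triplets = []
    for start in range(0, len(anchor_indices), batch_size):
        batch = anchor_indices[start:start + batch_size]
        for position, acr in enumerate(batch):
            cluster = clusters[position % n]              # even split over the batch
            i, j = random.choice(cells_by_cluster[cluster])
            triplets.append((acr, i, j))                  # (anchor, S_pos index, S_neg index)
    return triplets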
Multi-Head Network
Network Structure. In recent years, several works (Dai
et al. 2020; Corso et al. 2021) have shown that convolutional
models outperform feedforward and recurrent models for se-
quence embedding, so our learning model utilizes the CNN
submodule in CNNED (Dai et al. 2020) as a general back-
bone. Subsequently, multiple multi-layer perceptron (MLP)
heads are deployed in parallel following the convolutional
layers, thereby facilitating the fusion of local features from
different perspectives to extract global features. In this struc-
ture, the number of heads is the same as the number of can-
didate clusters k. Each head has exactly the same structure
and is trained in parallel without communicating with each
other. The core idea of our multi-head model is that we hope
to learn one head for each candidate cluster, thus avoiding
potential contradictions between candidate selection clusters
during training. It is imperative to highlight that our model
exhibits an obvious distinction between the training and in-
ference phases; we introduce them separately below.
Training Phase. During the training phase as shown in
Figure 3(a), we first use the selection method introduced
in the previous section to select a triplet (S^i_acr, S^i_pos, S^i_neg) for each anchor sequence in a batch. Then, the one-hot embedding representations (X^i_acr, X^i_pos, X^i_neg) of all these triplets are simultaneously fed into the CNN, which encodes them as (y^i_acr, y^i_pos, y^i_neg). After CNN encoding, the flow of these triplets starts to fork, and triplets selected from different clusters are fed to different MLP heads. Specifically, the embedding function of our multi-head network during the training phase can be expressed as follows:
y^i_acr, y^i_pos, y^i_neg = CNN(X^i_acr, X^i_pos, X^i_neg)
z^i_acr, z^i_pos, z^i_neg = MLP_i(y^i_acr, y^i_pos, y^i_neg)
After all triplets are encoded by the model, the final loss is:
loss = Σ_{i=1}^{k} Loss(z^i_acr, z^i_pos, z^i_neg)    (2)
where k represents both the number of candidate selection clusters and the number of heads, and Loss is the combination of triplet loss and MSE loss.
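For illustration, the training-phase computation and the loss of Equation 2 can be sketched as follows (a minimal PyTorch sketch with placeholder layer sizes, not the authors' implementation; the MSE term mentioned above is omitted because its exact target is not spelled out here):

import torch
import torch.nn as nn

class MultiHeadEmbedder(nn.Module):
    # A shared CNN backbone (stand-in for the CNNED submodule) followed by one
    # MLP head per candidate cluster.
    def __init__(self, vocab_size=25, feat_dim=128, head_dim=32, num_heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(vocab_size, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, head_dim), nn.ReLU(), nn.Linear(head_dim, head_dim))
            for _ in range(num_heads)
        )

    def forward(self, one_hot, head_idx):
        y = self.cnn(one_hot)            # shared local features
        return self.heads[head_idx](y)   # head chosen by the triplet's cluster

triplet_loss = nn.TripletMarginLoss(margin=1.0)

def training_loss(model, batches_per_head):
    # batches_per_head[i] = (X_acr, X_pos, X_neg) one-hot tensors routed to head i.
    total = 0.0
    for i, (x_a, x_p, x_n) in enumerate(batches_per_head):
        z_a, z_p, z_n = model(x_a, i), model(x_p, i), model(x_n, i)
        total = total + triplet_loss(z_a, z_p, z_n)  # summed over the k heads (Eq. 2)
    return total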
Figure 4: Multi-head Network (Inference).
Inference Phase. As depicted in Figure 4, we feed all sequences into the trained neural network one by one during the inference phase. For each sequence, we use its one-hot representation X and encode it through the CNN, then feed the feature y output by the CNN to all the MLP heads simultaneously. The outputs [z_1, ..., z_k] of these heads are then all concatenated together. In addition, we treat the CNN as a special head and concatenate the feature y output by the CNN to the end. We explain the reason for concatenating the CNN features below. The embedding function during the inference phase can be expressed as follows:
y = CNN(X)    (3)
z_i = MLP_i(y)    (4)
The representation of the sequence in the embedding space is:
Embedding = [z_1, ..., z_k, y]    (5)
CNN Serves as a Special Head. The core idea of our net-
work is to train distinct MLP heads for each candidate clus-
ter. Each of these heads aims to learn unique weights for the
local features extracted by the CNN, essentially learning the
most discriminative features that can distinguish different
sequences within each cluster. However, fine-grained details
can easily be ignored during learning. To alleviate the po-
tential impact of these fine-grained feature losses, we intro-
duce a compensation measure using CNN as a special head
in the inference stage. Specifically, we concatenate local fea-
tures with the final embedding, which is similar to the effect
of fully connected layers with identity matrix and frozen
weights. This approach effectively counteracts the adverse
consequences of fine-grained features being ignored.
Embedding Geometry. There are many studies using
various functions to calculate the distance between two em-
bedding vectors, including Euclidean distance (Dai et al.
2020), Jaccard distance (Zheng et al. 2019), Hyperbolic dis-
tance (Corso et al. 2021), etc. However, for the multi-head
network we designed, the final embedding of the sequence
is the concatenation of vectors output by multiple heads. In
order to make the features of each head play a bigger role
in the distance calculation, we use a new metric instead of
directly using the Euclidean distance to calculate the dis-
tance between vectors. Specifically, we first calculate the Eu-
clidean distance between the vectors output by a single head,
and then sum the Euclidean distances of multiple heads as
the final distance. Suppose there are two embedding vectors x = [x_1, ..., x_k, x_cnn] and y = [y_1, ..., y_k, y_cnn]; then the distance between them can be described as:
dist_e(x, y) = Euc(x_cnn, y_cnn) + Σ_{i=1}^{k} Euc(x_i, y_i)    (6)
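A direct transcription of Equation 6, keeping the embedding as a list of per-head vectors with the raw CNN feature as one more entry, could look like this (our own sketch; the toy dimensions are made up):

import torch

def dist_e(x_heads, y_heads):
    # Equation 6: sum of per-head Euclidean distances; the last entry of each
    # list is the CNN feature treated as a special head.
    return sum(torch.dist(xi, yi, p=2) for xi, yi in zip(x_heads, y_heads))

x = [torch.randn(32), torch.randn(32), torch.randn(128)]  # two heads + CNN feature
y = [torch.randn(32), torch.randn(32), torch.randn(128)]
print(dist_e(x, y))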
Experiments
Experimental Settings
Datasets. We evaluate our neural embeddings through the
utilization of two extensively recognized datasets (Dai et al. 2020; Zhang, Yuan, and Indyk 2019), i.e., Uniprot and Uniref. These datasets exhibit varying sizes and sequence lengths, and their properties are shown in Table 1. Con-
sistent with existing works, we partition each dataset into
distinct subsets, namely the training set, query set, and base
set. Both the training set and the query set are composed of
1,000 sequences, and the other items belong to the base set.
Dataset Uniprot Uniref
Alphabet Size 25 24
# Items 474741 395869
Avg-Length 376.47 442.84
Min-Length 2 201
Max-Length 4998 4998
Table 1: Dataset Statistics
Metrics. We follow existing works (Zhang, Yuan, and In-
dyk 2019; Dai et al. 2020) and use the task of nearest neigh-
bor search to evaluate the effectiveness of our model, i.e., whether the distance order is still preserved in the embedding space. Specifically, we use: (1) Top-k hitting ratio (HR@k).
This metric is used to detect the overlap percentage of the
top-k results and the ground truth. (2) Top-1 Recall. This
one evaluates the performance of finding the most similar
sequence to the query sequence by different methods.
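HR@k reduces to a set-overlap computation; a minimal sketch (our own, with a toy example) is:

def hitting_ratio(retrieved_topk, ground_truth_topk):
    # Overlap percentage between the top-k neighbors retrieved in the embedding
    # space and the ground-truth top-k under the NW distance.
    k = len(ground_truth_topk)
    return 100.0 * len(set(retrieved_topk) & set(ground_truth_topk)) / k

print(hitting_ratio([3, 7, 9, 1], [3, 9, 2, 5]))  # 50.0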
Baselines. We adopt previous network-based approaches
as baselines, including GRU (Zhang, Yuan, and Indyk 2019),
CNNED (Dai et al. 2020), NeuroSEED (Corso et al. 2021),
AsMac (Chen et al. 2022), where NeuroSEED can be fur-
ther divided into Global (Global T.) and Local Transformer
(Local T.). Since SENSE (Zheng et al. 2019) cannot be used
for unequal-length datasets, and its performance has been
proven to be weaker than AsMac, we will not use it as a
baseline. To demonstrate the effectiveness of the selection
method and multi-head network, we use Bio-kNN-Base to
denote the method without cascading CNN features, and re-
fer to the complete method as Bio-kNN.
Implementation Details. We use the EMBOSS toolkit (https://www.ebi.ac.uk/Tools/emboss/) to compute the NW distance between sequences. In our implementation, we set the split interval δ = 100 and experimen-
tally tested the effect of various clustering algorithms and
the number of clusters. Besides, we directly used the CNN
submodule in CNNED. Code and datasets are available at
https://github.com/Proudc/Bio-KNN.
Experimental results
Clustering-Based Triplet Selection. Tables 2 and 3 show
the performance of Bio-kNN-Base under various cluster-
ing algorithms and the number of clusters, including k-
means, agglomerative (HAC), spectral clustering, and non-
clustering. These results show that: (1) With a fixed output
dimension (128), the performance of Bio-kNN-Base con-
sistently surpasses the non-clustering counterpart in various
algorithms and the number of clusters, reaffirming the in-
dispensability of segmenting the selection space. (2) HAC
shows superior performance within certain configurations in
contrast to the other two methods. This may be attributed to
the ability of HAC to handle outlier cells more efficiently
relative to other techniques, which also prompted us to use
the HAC by default in subsequent experiments.
#Clusters*(D/h) Method HR@1 HR@10 HR@50
1*128 None 48.30 35.48 24.21
2*64 K-Means 48.60 36.51 25.19
2*64 HAC 48.60 36.51 25.19
2*64 Spectral 48.60 36.51 25.19
4*32 K-Means 49.90 38.58 26.98
4*32 HAC 50.50 39.13 27.28
4*32 Spectral 49.00 36.60 25.23
8*16 K-Means 49.70 37.90 26.00
8*16 HAC 48.80 37.52 25.70
8*16 Spectral 48.30 36.23 24.86
Note: D/h indicates the output dimension of each head.
Table 2: Uniprot: various clustering methods and # clusters
#Clusters*(D/h) Method HR@1 HR@10 HR@50
1*128 None 28.30 24.39 15.60
2*64 K-Means 31.10 26.91 17.54
2*64 HAC 33.90 29.88 19.58
2*64 Spectral 29.30 25.88 16.84
4*32 K-Means 30.70 25.83 16.63
4*32 HAC 32.40 26.92 17.42
4*32 Spectral 30.00 25.93 16.91
8*16 K-Means 31.70 26.80 17.41
8*16 HAC 32.20 26.57 17.34
8*16 Spectral 31.20 25.67 16.90
Table 3: Uniref: various clustering methods and # clusters
Model (Uniprot: HR@1 HR@5 HR@10 HR@50 / Uniref: HR@1 HR@5 HR@10 HR@50)
AsMac 47.07 32.60 24.25 9.93 / 20.57 11.93 8.08 2.68
GRU 40.83 40.05 34.53 23.16 / 30.73 26.53 22.73 13.62
CNNED 47.70 40.43 34.58 23.37 / 35.13 32.51 28.55 18.72
Global T. 48.76 39.97 34.16 22.29 / 27.80 22.38 18.67 10.47
Local T. 49.10 40.11 34.27 22.43 / 27.07 21.23 17.94 10.20
Bio-kNN 54.00 48.31 42.69 30.28 / 37.60 36.18 32.51 21.13
Gap With SOTA +4.90 +7.88 +8.11 +6.91 / +2.47 +3.67 +3.96 +2.41
Table 4: Embedding Results (repeat three times and report average results)
Figure 5: Top-1 Recall curves for multiple methods on (a) Uniprot and (b) Uniref (comparing AsMac, GRU, Global T., Local T., CNNED, and Bio-kNN; x-axis: # Items [k], y-axis: Top-1 Recall).
Embedding Effectiveness. Table 4 presents an overview of the performance exhibited by different methods concerning the top-k similarity search task. As shown, on both datasets, our method Bio-kNN significantly outperforms all methods on all metrics. Using the Uniprot dataset as an example, Bio-kNN yields a remarkable enhancement across
metrics, ranging from 4.9% to 8.11% when compared to
the state-of-the-art counterparts. Notably, a substantial ma-
jority of metrics experience an augmentation of over 6%.
This non-negligible improvement is impressive given the
fact that, unlike previous methods that only focus on partial
subsets of triplets, Bio-kNN essentially partitions the entire
selection space and learns individual heads for each distinct
subspace. Besides, Bio-kNN incorporates the fine-grained
local features extracted by CNN, which further improves
its ability to distinguish similarities between sequences. We
plot the curves of Top-1 recall for various methods on dif-
ferent datasets in Figure 5. We observe that our model also
achieves significant performance gains on the task of finding
the most similar sequence compared to other methods.
Ablation Studies. Our Bio-kNN comprises three mod-
ules: clustering-based triplet selection, a multi-head net-
work, and CNN features. We conduct the following exper-
iments to validate the contributions of these modules: (1)
Considering that the necessity of segmenting the space has
been verified in Table 2 and 3, we exclusively explore spe-
cific segmentation methods. We thus independently evaluate
the segmentation outcomes on both sides of Figure 6. (2) Re-
placing the multi-head (M) network with a single-head (S)
network. (3) Omitting the features extracted by CNN.
The results in Table 5 demonstrate that neglecting any
of the three modules leads to a reduction in performance.
The reason is that we take into account the distance distri-
bution among cells when segmenting the selection space.
Figure 6: Segmentation Results of HAC (H) and Average (A). Panels: (a) Uniprot: HAC-Based; (b) Uniprot: Average-Based; (c) Uniref: HAC-Based; (d) Uniref: Average-Based (axes: Index of Spos vs. Index of Sneg).
Datasets Method HR@1 HR@10 HR@50
Uniprot H + S + CNN 53.20 41.02 28.56
Uniprot A + M + CNN 52.03 40.31 27.93
Uniprot H + M 50.40 39.01 27.26
Uniprot H + M + CNN 54.00 42.69 30.28
Uniref H + S + CNN 35.63 30.31 19.75
Uniref A + M + CNN 35.43 30.19 19.60
Uniref H + M 33.67 28.95 18.86
Uniref H + M + CNN 37.60 32.51 21.13
Table 5: Ablation Studies Results
Separate heads are assembled for clusters with large dif-
ferences in distribution, making training more targeted. The
fine-grained features extracted by CNN also effectively en-
hance the model’s ability to distinguish sequence similarity.
Conclusion
We propose Bio-kNN for biological nearest neighbor search,
which includes a clustering-based triplet selection method
and a CNN-based multi-head network. It also incorporates
local features extracted by CNN. Experimental results show
that Bio-kNN outperforms the state-of-the-art.
Acknowledgments
This work is supported by the Fundamental Research Funds
for the Central Universities(No.226-2022-00028). The au-
thors would like to thank Zepeng Li for his help with this
work, including analysis and discussions.
References
Chen, J.; Yang, L.; Li, L.; Goodison, S.; and Sun, Y. 2022.
Alignment-free comparison of metagenomics sequences via
approximate string matching. Bioinformatics Advances,
2(1): vbac077.
Chothia, C.; and Lesk, A. M. 1986. The relation between
the divergence of sequence and structure in proteins. The
EMBO journal, 5(4): 823–826.
Corso, G.; Ying, Z.; Pándy, M.; Velickovic, P.; Leskovec, J.;
and Liò, P. 2021. Neural Distance Embeddings for Biologi-
cal Sequences. In NeurIPS, 18539–18551.
Dai, X.; Yan, X.; Zhou, K.; Wang, Y.; Yang, H.; and Cheng,
J. 2020. Convolutional Embedding for Edit Distance. In
ACM SIGIR, 599–608. ACM.
Forgy, E. W. 1965. Cluster analysis of multivariate data: ef-
ficiency versus interpretability of classifications. biometrics,
21: 768–769.
Fuglede, B.; and Topsøe, F. 2004. Jensen-Shannon diver-
gence and Hilbert space embedding. In ISIT, 31. IEEE.
Gao, L.; and Qi, J. 2007. Whole genome molecular phy-
logeny of large dsDNA viruses using composition vector
method. BMC evolutionary biology, 7(1): 1–7.
Haubold, B.; Pfaffelhuber, P.; Domazet-Lošo, M.; and
Wiehe, T. 2009. Estimating mutation distances from un-
aligned genomes. Journal of Computational Biology,
16(10): 1487–1500.
Hermans, A.; Beyer, L.; and Leibe, B. 2017. In defense of
the triplet loss for person re-identification. arXiv preprint
arXiv:1703.07737.
Kariin, S.; and Burge, C. 1995. Dinucleotide relative abun-
dance extremes: a genomic signature. Trends in genetics,
11(7): 283–290.
Kullback, S.; and Leibler, R. A. 1951. On information and
sufficiency. The annals of mathematical statistics, 22(1):
79–86.
Leimeister, C.-A.; and Morgenstern, B. 2014. Kmacs:
the k-mismatch average common substring approach to
alignment-free sequence comparison. Bioinformatics,
30(14): 2000–2008.
Li, W.; and Godzik, A. 2021. Cd-hit: a fast program for
clustering and comparing large sets of protein or nucleotide
sequences. Bioinformatics 22, 1658–1659 (2006). Scientific
Reports, 11: 3702.
Murtagh, F.; and Contreras, P. 2012. Algorithms for hierar-
chical clustering: an overview. WIREs Data Mining Knowl.
Discov., 2(1): 86–97.
Needleman, S. B.; and Wunsch, C. D. 1970. A general
method applicable to the search for similarities in the amino
acid sequence of two proteins. Journal of molecular biology,
48(3): 443–453.
Ostrovsky, R.; and Rabani, Y. 2007. Low distortion embed-
dings for edit distance. J. ACM, 54(5): 23.
Rubner, Y.; Tomasi, C.; and Guibas, L. J. 2000. The Earth
Mover’s Distance as a Metric for Image Retrieval. IJCV,
40(2): 99–121.
Sander, C.; and Schneider, R. 1991. Database of homology-
derived protein structures and the structural meaning of se-
quence alignment. Proteins: Structure, Function, and Bioin-
formatics, 9(1): 56–68.
Sims, G. E.; Jun, S.-R.; Wu, G. A.; and Kim, S.-H. 2009.
Alignment-free genome comparison with feature frequency
profiles (FFP) and optimal resolutions. Proceedings of the
National Academy of Sciences, 106(8): 2677–2682.
Steinegger, M.; and Söding, J. 2018. Clustering huge protein
sequence sets in linear time. Nature communications, 9(1):
1–8.
Ulitsky, I.; Burstein, D.; Tuller, T.; and Chor, B. 2006. The
average common substring approach to phylogenomic re-
construction. Journal of Computational Biology, 13(2):
336–350.
von Luxburg, U. 2007. A tutorial on spectral clustering. Stat.
Comput., 17(4): 395–416.
Weinberger, K. Q.; and Saul, L. K. 2009. Distance met-
ric learning for large margin nearest neighbor classification.
Journal of machine learning research, 10(2).
Zhang, X.; Yuan, Y .; and Indyk, P. 2019. Neural embeddings
for nearest neighbor search under edit distance.
Zheng, W.; Yang, L.; Genco, R. J.; Wactawski-Wende, J.;
Buck, M.; and Sun, Y . 2019. SENSE: Siamese neural net-
work for sequence embedding and alignment-free compari-
son. Bioinform., 35(11): 1820–1828.
Provided proper attribution is provided, Google hereby grants permission to
reproduce the tables and figures in this paper solely for use in journalistic or
scholarly works.
Attention Is All You Need
Ashish Vaswani∗ (Google Brain) avaswani@google.com
Noam Shazeer∗ (Google Brain) noam@google.com
Niki Parmar∗ (Google Research) nikip@google.com
Jakob Uszkoreit∗ (Google Research) usz@google.com
Llion Jones∗ (Google Research) llion@google.com
Aidan N. Gomez∗† (University of Toronto) aidan@cs.toronto.edu
Łukasz Kaiser∗ (Google Brain) lukaszkaiser@google.com
Illia Polosukhin∗‡ illia.polosukhin@gmail.com
Abstract
The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks that include an encoder and a decoder. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer,
based solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to
be superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-
to-German translation task, improving over the existing best results, including
ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,
our model establishes a new single-model state-of-the-art BLEU score of 41.8 after
training for 3.5 days on eight GPUs, a small fraction of the training costs of the
best models from the literature. We show that the Transformer generalizes well to
other tasks by applying it successfully to English constituency parsing both with
large and limited training data.
∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started
the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and
has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head
attention and the parameter-free position representation and became the other person involved in nearly every
detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and
tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and
efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and
implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating
our research.
†Work performed while at Google Brain.
‡Work performed while at Google Research.
1 Introduction
Recurrent neural networks, long short-term memory [ 13] and gated recurrent [ 7] neural networks
in particular, have been firmly established as state of the art approaches in sequence modeling and
transduction problems such as language modeling and machine translation [ 35,2,5]. Numerous
efforts have since continued to push the boundaries of recurrent language models and encoder-decoder
architectures [38, 24, 15].
Recurrent models typically factor computation along the symbol positions of the input and output
sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden
states ht, as a function of the previous hidden state ht−1and the input for position t. This inherently
sequential nature precludes parallelization within training examples, which becomes critical at longer
sequence lengths, as memory constraints limit batching across examples. Recent work has achieved
significant improvements in computational efficiency through factorization tricks [ 21] and conditional
computation [ 32], while also improving model performance in case of the latter. The fundamental
constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduc-
tion models in various tasks, allowing modeling of dependencies without regard to their distance in
the input or output sequences [ 2,19]. In all but a few cases [ 27], however, such attention mechanisms
are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead
relying entirely on an attention mechanism to draw global dependencies between input and output.
The Transformer allows for significantly more parallelization and can reach a new state of the art in
translation quality after being trained for as little as twelve hours on eight P100 GPUs.
2 Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU
[16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as their basic building
block, computing hidden representations in parallel for all input and output positions. In these models,
the number of operations required to relate signals from two arbitrary input or output positions grows
in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes
it more difficult to learn dependencies between distant positions [ 12]. In the Transformer this is
reduced to a constant number of operations, albeit at the cost of reduced effective resolution due
to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as
described in section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions
of a single sequence in order to compute a representation of the sequence. Self-attention has been
used successfully in a variety of tasks including reading comprehension, abstractive summarization,
textual entailment and learning task-independent sentence representations [4, 27, 28, 22].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-
aligned recurrence and have been shown to perform well on simple-language question answering and
language modeling tasks [34].
To the best of our knowledge, however, the Transformer is the first transduction model relying
entirely on self-attention to compute representations of its input and output without using sequence-
aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate
self-attention and discuss its advantages over models such as [17, 18] and [9].
3 Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [ 5,2,35].
Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence
of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output
sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive
[10], consuming the previously generated symbols as additional input when generating the next.
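To make the auto-regressive factorization concrete, the following is a minimal greedy-decoding sketch (not the authors' code); `encode` and `decode_step` are hypothetical stand-ins for the encoder and decoder described below, and the paper itself uses beam search rather than greedy choice.

```python
import numpy as np

def greedy_decode(encode, decode_step, src_ids, bos_id, eos_id, max_len=50):
    """Auto-regressive generation: each new symbol is predicted from the encoder
    memory plus all previously generated symbols (hypothetical encode/decode_step API)."""
    memory = encode(src_ids)               # continuous representations z = (z_1, ..., z_n)
    out = [bos_id]                         # start with a beginning-of-sequence symbol
    for _ in range(max_len):
        logits = decode_step(memory, out)  # scores over the vocabulary for the next position
        next_id = int(np.argmax(logits))   # greedy choice; the paper actually uses beam search
        out.append(next_id)
        if next_id == eos_id:
            break
    return out
```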
Figure 1: The Transformer model architecture.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully
connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1,
respectively.
3.1 Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two
sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-
wise fully connected feed-forward network. We employ a residual connection [11] around each of
the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is
LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer
itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding
layers, produce outputs of dimension d_model = 512.
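As an illustrative NumPy sketch (not the paper's implementation), the LayerNorm(x + Sublayer(x)) pattern can be written as follows; the learned layer-norm gain and bias, and the dropout applied to sub-layer outputs (Section 5.4), are omitted for brevity.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's d_model-dimensional vector to zero mean, unit variance
    # (the learned gain and bias of LayerNorm are omitted in this sketch).
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # Output of each sub-layer: LayerNorm(x + Sublayer(x)); dropout is omitted here.
    return layer_norm(x + sublayer(x))

# Toy usage: wrap a trivial sub-layer around a (sequence length, d_model) input.
x = np.random.randn(10, 512)
y = residual_sublayer(x, sublayer=lambda h: 0.1 * h)
```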
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two
sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head
attention over the output of the encoder stack. Similar to the encoder, we employ residual connections
around each of the sub-layers, followed by layer normalization. We also modify the self-attention
sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This
masking, combined with the fact that the output embeddings are offset by one position, ensures that the
predictions for position i can depend only on the known outputs at positions less than i.
3.2 Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output,
where the query, keys, values, and output are all vectors. The output is computed as a weighted sum
of the values, where the weight assigned to each value is computed by a compatibility function of the
query with the corresponding key.
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several
attention layers running in parallel.
3.2.1 Scaled Dot-Product Attention
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of
queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the
query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the
values.
In practice, we compute the attention function on a set of queries simultaneously, packed together
into a matrix Q. The keys and values are also packed together into matrices K and V. We compute
the matrix of outputs as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (1)$$
The two most commonly used attention functions are additive attention [ 2], and dot-product (multi-
plicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor
of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with
a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is
much faster and more space-efficient in practice, since it can be implemented using highly optimized
matrix multiplication code.
While for small values of d_k the two mechanisms perform similarly, additive attention outperforms
dot product attention without scaling for larger values of d_k [3]. We suspect that for large values of
d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has
extremely small gradients (see footnote 4). To counteract this effect, we scale the dot products by 1/√d_k.
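Equation (1), written out as a small NumPy sketch with an optional additive mask (0 or -inf entries) of the kind used later in the decoder; the toy shapes are placeholders, not values from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # (n_queries, n_keys)
    if mask is not None:
        scores = scores + mask                       # additive mask: 0 or -inf per entry
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# Toy usage: 5 queries/keys/values with d_k = d_v = 64.
Q, K, V = (np.random.randn(5, 64) for _ in range(3))
out, attn = scaled_dot_product_attention(Q, K, V)    # out: (5, 64), attn: (5, 5)
```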
3.2.2 Multi-Head Attention
Instead of performing a single attention function with d_model-dimensional keys, values and queries,
we found it beneficial to linearly project the queries, keys and values h times with different, learned
linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of
queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional
output values. These are concatenated and once again projected, resulting in the final values, as
depicted in Figure 2.
(Footnote 4) To illustrate why the dot products get large, assume that the components of q and k are independent
random variables with mean 0 and variance 1. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean 0 and variance d_k.
Multi-head attention allows the model to jointly attend to information from different representation
subspaces at different positions. With a single attention head, averaging inhibits this.
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, ..., \mathrm{head}_h)W^{O}, \quad \text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$$
where the projections are parameter matrices $W_i^{Q} \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W_i^{K} \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W_i^{V} \in \mathbb{R}^{d_{\text{model}} \times d_v}$
and $W^{O} \in \mathbb{R}^{h d_v \times d_{\text{model}}}$.
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use
d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost
is similar to that of single-head attention with full dimensionality.
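A compact NumPy sketch of multi-head attention with h = 8 and d_k = d_v = d_model/h = 64; the randomly initialized projection matrices are placeholders, purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    """MultiHead(Q,K,V) = Concat(head_1, ..., head_h) W^O,
    with head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)."""
    heads = []
    for Wq, Wk, Wv in zip(WQ, WK, WV):                 # one projection triple per head
        q, k, v = Q @ Wq, K @ Wk, V @ Wv
        scores = q @ k.T / np.sqrt(q.shape[-1])
        heads.append(softmax(scores) @ v)              # (n, d_v) per head
    return np.concatenate(heads, axis=-1) @ WO         # concat to (n, h*d_v), then project

d_model, h = 512, 8
d_k = d_v = d_model // h                               # 64
x = np.random.randn(10, d_model)                       # self-attention: Q = K = V = x
WQ = [np.random.randn(d_model, d_k) * 0.02 for _ in range(h)]
WK = [np.random.randn(d_model, d_k) * 0.02 for _ in range(h)]
WV = [np.random.randn(d_model, d_v) * 0.02 for _ in range(h)]
WO = np.random.randn(h * d_v, d_model) * 0.02
y = multi_head_attention(x, x, x, WQ, WK, WV, WO)      # (10, 512)
```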
3.2.3 Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
•In "encoder-decoder attention" layers, the queries come from the previous decoder layer,
and the memory keys and values come from the output of the encoder. This allows every
position in the decoder to attend over all positions in the input sequence. This mimics the
typical encoder-decoder attention mechanisms in sequence-to-sequence models such as
[38, 2, 9].
•The encoder contains self-attention layers. In a self-attention layer all of the keys, values
and queries come from the same place, in this case, the output of the previous layer in the
encoder. Each position in the encoder can attend to all positions in the previous layer of the
encoder.
•Similarly, self-attention layers in the decoder allow each position in the decoder to attend to
all positions in the decoder up to and including that position. We need to prevent leftward
information flow in the decoder to preserve the auto-regressive property. We implement this
inside of scaled dot-product attention by masking out (setting to −∞) all values in the input
of the softmax which correspond to illegal connections (a sketch of this mask construction follows the list). See Figure 2.
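The mask referenced in the last bullet can be built as in the sketch below: entries strictly above the diagonal are set to -inf before the softmax, so position i receives zero weight on positions j > i. This is illustrative; deep-learning frameworks provide their own masking utilities.

```python
import numpy as np

def causal_mask(n):
    """Additive decoder mask: 0.0 where j <= i, -inf where j > i (future positions)."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return np.where(j > i, -np.inf, 0.0)

scores = np.random.randn(4, 4)     # unmasked attention logits for 4 decoder positions
masked = scores + causal_mask(4)   # illegal (leftward-information-leaking) entries become -inf
# After the softmax, each row i assigns zero weight to all positions j > i.
```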
3.3 Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully
connected feed-forward network, which is applied to each position separately and identically. This
consists of two linear transformations with a ReLU activation in between.
$$\mathrm{FFN}(x) = \max(0,\; xW_1 + b_1)W_2 + b_2 \qquad (2)$$
While the linear transformations are the same across different positions, they use different parameters
from layer to layer. Another way of describing this is as two convolutions with kernel size 1.
The dimensionality of input and output is d_model = 512, and the inner-layer has dimensionality
d_ff = 2048.
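Equation (2) as a NumPy sketch, applied identically at every position; the randomly initialized weights are placeholders with the stated dimensions d_model = 512 and d_ff = 2048.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied to each position separately."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff = 512, 2048
W1, b1 = np.random.randn(d_model, d_ff) * 0.02, np.zeros(d_ff)
W2, b2 = np.random.randn(d_ff, d_model) * 0.02, np.zeros(d_model)
x = np.random.randn(10, d_model)               # 10 positions
y = position_wise_ffn(x, W1, b1, W2, b2)       # (10, 512)
```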
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input
tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transfor-
mation and softmax function to convert the decoder output to predicted next-token probabilities. In
our model, we share the same weight matrix between the two embedding layers and the pre-softmax
linear transformation, similar to [30]. In the embedding layers, we multiply those weights by √d_model.
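A brief sketch of the shared embedding / pre-softmax weight matrix and the √d_model scaling; the toy vocabulary size and initialization are placeholders (the paper's shared BPE vocabulary has about 37000 tokens).

```python
import numpy as np

vocab, d_model = 1000, 512                 # toy vocabulary; the shared BPE vocabulary is ~37000
E = np.random.randn(vocab, d_model) * 0.02 # one matrix shared by both embedding layers
                                           # and the pre-softmax linear transformation

def embed(token_ids):
    # Embedding lookup, scaled by sqrt(d_model) as described above.
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_states):
    # The pre-softmax projection reuses the transposed embedding matrix.
    return decoder_states @ E.T
```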
Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations
for different layer types. n is the sequence length, d is the representation dimension, k is the kernel
size of convolutions and r the size of the neighborhood in restricted self-attention.

| Layer Type | Complexity per Layer | Sequential Operations | Maximum Path Length |
|---|---|---|---|
| Self-Attention | O(n²·d) | O(1) | O(1) |
| Recurrent | O(n·d²) | O(n) | O(n) |
| Convolutional | O(k·n·d²) | O(1) | O(log_k(n)) |
| Self-Attention (restricted) | O(r·n·d) | O(1) | O(n/r) |
3.5 Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the
order of the sequence, we must inject some information about the relative or absolute position of the
tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model
as the embeddings, so that the two can be summed. There are many choices of positional encodings,
learned and fixed [9].
In this work, we use sine and cosine functions of different frequencies:
$$PE_{(pos,\,2i)} = \sin\!\left(pos / 10000^{2i/d_{\text{model}}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(pos / 10000^{2i/d_{\text{model}}}\right)$$
where pos is the position and i is the dimension. That is, each dimension of the positional encoding
corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000·2π. We
chose this function because we hypothesized it would allow the model to easily learn to attend by
relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of
PE_{pos}.
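The sinusoidal encoding above, written out in NumPy: even dimensions use sine and odd dimensions use cosine of the same geometrically spaced frequencies. This is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(max_len)[:, None]              # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]          # even dimension indices 0, 2, 4, ...
    angles = pos / np.power(10000.0, i / d_model)  # geometric progression of wavelengths
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(max_len=100, d_model=512)   # added to the input embeddings
```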
We also experimented with using learned positional embeddings [ 9] instead, and found that the two
versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version
because it may allow the model to extrapolate to sequence lengths longer than the ones encountered
during training.
4 Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolu-
tional layers commonly used for mapping one variable-length sequence of symbol representations
(x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden
layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we
consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can
be parallelized, as measured by the minimum number of sequential operations required.
The third is the path length between long-range dependencies in the network. Learning long-range
dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the
ability to learn such dependencies is the length of the paths forward and backward signals have to
traverse in the network. The shorter these paths between any combination of positions in the input
and output sequences, the easier it is to learn long-range dependencies [ 12]. Hence we also compare
the maximum path length between any two input and output positions in networks composed of the
different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially
executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of
computational complexity, self-attention layers are faster than recurrent layers when the sequence
length n is smaller than the representation dimensionality d, which is most often the case with
sentence representations used by state-of-the-art models in machine translation, such as word-piece
[38] and byte-pair [31] representations. To improve computational performance for tasks involving
very long sequences, self-attention could be restricted to considering only a neighborhood of size r in
the input sequence centered around the respective output position. This would increase the maximum
path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output
positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels,
or O(log_k(n)) in the case of dilated convolutions [18], increasing the length of the longest paths
between any two positions in the network. Convolutional layers are generally more expensive than
recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity
considerably, to O(k·n·d + n·d²). Even with k = n, however, the complexity of a separable
convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer,
the approach we take in our model.
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions
from our models and present and discuss examples in the appendix. Not only do individual attention
heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic
and semantic structure of the sentences.
5 Training
This section describes the training regime for our models.
5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million
sentence pairs. Sentences were encoded using byte-pair encoding [ 3], which has a shared source-
target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT
2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece
vocabulary [ 38]. Sentence pairs were batched together by approximate sequence length. Each training
batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000
target tokens.
5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using
the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We
trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the
bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps
(3.5 days).
5.3 Optimizer
We used the Adam optimizer [20] with β_1 = 0.9, β_2 = 0.98 and ε = 10^{-9}. We varied the learning
rate over the course of training, according to the formula:
$$lrate = d_{\text{model}}^{-0.5} \cdot \min\left(step\_num^{-0.5},\; step\_num \cdot warmup\_steps^{-1.5}\right) \qquad (3)$$
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps,
and decreasing it thereafter proportionally to the inverse square root of the step number. We used
warmup_steps = 4000.
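Equation (3) as a small function, evaluated at a few steps to show the linear warmup followed by inverse-square-root decay (illustrative sketch with d_model = 512 and warmup_steps = 4000).

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)."""
    step = max(step, 1)                 # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for s in (100, 4000, 100_000):
    print(s, transformer_lr(s))         # rises linearly to step 4000, then decays as 1/sqrt(step)
```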
5.4 Regularization
We employ three types of regularization during training:
Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the
English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

| Model | BLEU (EN-DE) | BLEU (EN-FR) | Training Cost in FLOPs (EN-DE / EN-FR) |
|---|---|---|---|
| ByteNet [18] | 23.75 | | |
| Deep-Att + PosUnk [39] | | 39.2 | – / 1.0·10^20 |
| GNMT + RL [38] | 24.6 | 39.92 | 2.3·10^19 / 1.4·10^20 |
| ConvS2S [9] | 25.16 | 40.46 | 9.6·10^18 / 1.5·10^20 |
| MoE [32] | 26.03 | 40.56 | 2.0·10^19 / 1.2·10^20 |
| Deep-Att + PosUnk Ensemble [39] | | 40.4 | – / 8.0·10^20 |
| GNMT + RL Ensemble [38] | 26.30 | 41.16 | 1.8·10^20 / 1.1·10^21 |
| ConvS2S Ensemble [9] | 26.36 | 41.29 | 7.7·10^19 / 1.2·10^21 |
| Transformer (base model) | 27.3 | 38.1 | 3.3·10^18 |
| Transformer (big) | 28.4 | 41.8 | 2.3·10^19 |
Residual Dropout We apply dropout [ 33] to the output of each sub-layer, before it is added to the
sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the
positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of
P_drop = 0.1.
Label Smoothing During training, we employed label smoothing of value ε_ls = 0.1 [36]. This
hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
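One common way to realize this, sketched below for a single position, is to train against a smoothed target that puts 1 − ε_ls on the correct token and spreads ε_ls over the remaining vocabulary; padding handling and batching are omitted, and the exact smoothing variant is an assumption rather than a detail given in the paper.

```python
import numpy as np

def label_smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a smoothed target for one position:
    1 - eps on the true token, eps spread over the other vocabulary entries."""
    vocab = logits.shape[-1]
    log_z = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
    log_probs = logits - log_z                      # log-softmax
    smooth = np.full(vocab, eps / (vocab - 1))
    smooth[target] = 1.0 - eps
    return -np.sum(smooth * log_probs)

loss = label_smoothed_cross_entropy(np.random.randn(1000), target=42, eps=0.1)
```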
6 Results
6.1 Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big)
in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0
BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is
listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model
surpasses all previously published models and ensembles, at a fraction of the training cost of any of
the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.8,
outperforming all of the previously published single models, at less than 1/4 the training cost of the
previous state-of-the-art model. The Transformer (big) model trained for English-to-French used
dropout rate P_drop = 0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which
were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We
used beam search with a beam size of 4 and length penalty α = 0.6 [38]. These hyperparameters
were chosen after experimentation on the development set. We set the maximum output length during
inference to input length + 50, but terminate early when possible [38].
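Checkpoint averaging as described above can be sketched as an element-wise mean over saved parameter dictionaries; the in-memory dictionary format here is hypothetical, not the authors' checkpoint format.

```python
import numpy as np

def average_checkpoints(checkpoints):
    """checkpoints: list of {parameter_name: np.ndarray} dicts (e.g. the last 5 or 20 saved).
    Returns a single dict holding the element-wise mean of each parameter."""
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
            for name in checkpoints[0]}

# Toy check with two fake "checkpoints" of a single weight matrix.
ckpts = [{"w": np.ones((2, 2))}, {"w": 3 * np.ones((2, 2))}]
assert np.allclose(average_checkpoints(ckpts)["w"], 2 * np.ones((2, 2)))
```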
Table 2 summarizes our results and compares our translation quality and training costs to other model
architectures from the literature. We estimate the number of floating point operations used to train a
model by multiplying the training time, the number of GPUs used, and an estimate of the sustained
single-precision floating-point capacity of each GPU (footnote 5).
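As a sanity check of that estimate, the arithmetic below reproduces the big-model figure in Table 2 from 3.5 days on 8 P100 GPUs at the 9.5 TFLOPS sustained rate given in footnote 5.

```python
# Training cost ≈ training time (s) * number of GPUs * sustained single-precision FLOPS per GPU.
seconds = 3.5 * 24 * 3600      # 3.5 days of training for the big model
gpus = 8                       # 8 P100 GPUs
flops_per_gpu = 9.5e12         # sustained P100 throughput assumed in footnote 5
print(f"{seconds * gpus * flops_per_gpu:.2e}")   # ~2.3e+19 FLOPs, matching Table 2 for Transformer (big)
```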
6.2 Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model
in different ways, measuring the change in performance on English-to-German translation on the
(Footnote 5) We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.
Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base
model. All metrics are on the English-to-German translation development set, newstest2013. Listed
perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to
per-word perplexities.

| | N | d_model | d_ff | h | d_k | d_v | P_drop | ε_ls | train steps | PPL (dev) | BLEU (dev) | params ×10^6 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| base | 6 | 512 | 2048 | 8 | 64 | 64 | 0.1 | 0.1 | 100K | 4.92 | 25.8 | 65 |
| (A) | | | | 1 | 512 | 512 | | | | 5.29 | 24.9 | |
| | | | | 4 | 128 | 128 | | | | 5.00 | 25.5 | |
| | | | | 16 | 32 | 32 | | | | 4.91 | 25.8 | |
| | | | | 32 | 16 | 16 | | | | 5.01 | 25.4 | |
| (B) | | | | | 16 | | | | | 5.16 | 25.1 | 58 |
| | | | | | 32 | | | | | 5.01 | 25.4 | 60 |
| (C) | 2 | | | | | | | | | 6.11 | 23.7 | 36 |
| | 4 | | | | | | | | | 5.19 | 25.3 | 50 |
| | 8 | | | | | | | | | 4.88 | 25.5 | 80 |
| | | 256 | | | 32 | 32 | | | | 5.75 | 24.5 | 28 |
| | | 1024 | | | 128 | 128 | | | | 4.66 | 26.0 | 168 |
| | | | 1024 | | | | | | | 5.12 | 25.4 | 53 |
| | | | 4096 | | | | | | | 4.75 | 26.2 | 90 |
| (D) | | | | | | | 0.0 | | | 5.77 | 24.6 | |
| | | | | | | | 0.2 | | | 4.95 | 25.5 | |
| | | | | | | | | 0.0 | | 4.67 | 25.3 | |
| | | | | | | | | 0.2 | | 5.47 | 25.7 | |
| (E) positional embedding instead of sinusoids | | | | | | | | | | 4.92 | 25.7 | |
| big | 6 | 1024 | 4096 | 16 | | | 0.3 | | 300K | 4.33 | 26.4 | 213 |
development set, newstest2013. We used beam search as described in the previous section, but no
checkpoint averaging. We present these results in Table 3.
In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions,
keeping the amount of computation constant, as described in Section 3.2.2. While single-head
attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.
In Table 3 rows (B), we observe that reducing the attention key size d_k hurts model quality. This
suggests that determining compatibility is not easy and that a more sophisticated compatibility
function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected,
bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our
sinusoidal positional encoding with learned positional embeddings [ 9], and observe nearly identical
results to the base model.
6.3 English Constituency Parsing
To evaluate if the Transformer can generalize to other tasks we performed experiments on English
constituency parsing. This task presents specific challenges: the output is subject to strong structural
constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence
models have not been able to attain state-of-the-art results in small-data regimes [37].
We trained a 4-layer transformer with d_model = 1024 on the Wall Street Journal (WSJ) portion of the
Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting,
using the larger high-confidence and BerkeleyParser corpora, with approximately 17M sentences
[37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens
for the semi-supervised setting.
We performed only a small number of experiments to select the dropout, both attention and residual
(section 5.4), learning rates and beam size on the Section 22 development set; all other parameters
remained unchanged from the English-to-German base translation model.
Table 4: The Transformer generalizes well to English constituency parsing (results are on Section 23
of WSJ).

| Parser | Training | WSJ 23 F1 |
|---|---|---|
| Vinyals & Kaiser et al. (2014) [37] | WSJ only, discriminative | 88.3 |
| Petrov et al. (2006) [29] | WSJ only, discriminative | 90.4 |
| Zhu et al. (2013) [40] | WSJ only, discriminative | 90.4 |
| Dyer et al. (2016) [8] | WSJ only, discriminative | 91.7 |
| Transformer (4 layers) | WSJ only, discriminative | 91.3 |
| Zhu et al. (2013) [40] | semi-supervised | 91.3 |
| Huang & Harper (2009) [14] | semi-supervised | 91.3 |
| McClosky et al. (2006) [26] | semi-supervised | 92.1 |
| Vinyals & Kaiser et al. (2014) [37] | semi-supervised | 92.1 |
| Transformer (4 layers) | semi-supervised | 92.7 |
| Luong et al. (2015) [23] | multi-task | 93.0 |
| Dyer et al. (2016) [8] | generative | 93.3 |
During inference, we increased the maximum output length to input length + 300. We used a beam size of 21 and α = 0.3
for both WSJ only and the semi-supervised setting.
Our results in Table 4 show that despite the lack of task-specific tuning our model performs sur-
prisingly well, yielding better results than all previously reported models with the exception of the
Recurrent Neural Network Grammar [8].
In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the
BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.
7 Conclusion
In this work, we presented the Transformer, the first sequence transduction model based entirely on
attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with
multi-headed self-attention.
For translation tasks, the Transformer can be trained significantly faster than architectures based
on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014
English-to-French translation tasks, we achieve a new state of the art. In the former task our best
model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We
plan to extend the Transformer to problems involving input and output modalities other than text and
to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs
such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful
comments, corrections and inspiration.
References
[1]Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450 , 2016.
[2]Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. CoRR , abs/1409.0473, 2014.
[3]Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V . Le. Massive exploration of neural
machine translation architectures. CoRR , abs/1703.03906, 2017.
[4]Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine
reading. arXiv preprint arXiv:1601.06733 , 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,
and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. CoRR , abs/1406.1078, 2014.
[6]Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv
preprint arXiv:1610.02357 , 2016.
[7]Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation
of gated recurrent neural networks on sequence modeling. CoRR , abs/1412.3555, 2014.
[8]Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural
network grammars. In Proc. of NAACL , 2016.
[9]Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolu-
tional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2 , 2017.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint
arXiv:1308.0850 , 2013.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for im-
age recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition , pages 770–778, 2016.
[12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in
recurrent nets: the difficulty of learning long-term dependencies, 2001.
[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation ,
9(8):1735–1780, 1997.
[14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations
across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural
Language Processing , pages 832–841. ACL, August 2009.
[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring
the limits of language modeling. arXiv preprint arXiv:1602.02410 , 2016.
[16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural
Information Processing Systems, (NIPS) , 2016.
[17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference
on Learning Representations (ICLR) , 2016.
[18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Ko-
ray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2 ,
2017.
[19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks.
InInternational Conference on Learning Representations , 2017.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR , 2015.
[21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint
arXiv:1703.10722 , 2017.
[22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen
Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint
arXiv:1703.03130 , 2017.
[23] Minh-Thang Luong, Quoc V . Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task
sequence to sequence learning. arXiv preprint arXiv:1511.06114 , 2015.
[24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-
based neural machine translation. arXiv preprint arXiv:1508.04025 , 2015.
[25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated
corpus of english: The penn treebank. Computational linguistics , 19(2):313–330, 1993.
[26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference ,
pages 152–159. ACL, June 2006.
[27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention
model. In Empirical Methods in Natural Language Processing , 2016.
[28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive
summarization. arXiv preprint arXiv:1705.04304 , 2017.
[29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact,
and interpretable tree annotation. In Proceedings of the 21st International Conference on
Computational Linguistics and 44th Annual Meeting of the ACL , pages 433–440. ACL, July
2006.
[30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv
preprint arXiv:1608.05859 , 2016.
[31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words
with subword units. arXiv preprint arXiv:1508.07909 , 2015.
[32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts
layer. arXiv preprint arXiv:1701.06538 , 2017.
[33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-
nov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine
Learning Research , 15(1):1929–1958, 2014.
[34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory
networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,
Advances in Neural Information Processing Systems 28 , pages 2440–2448. Curran Associates,
Inc., 2015.
[35] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural
networks. In Advances in Neural Information Processing Systems , pages 3104–3112, 2014.
[36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.
Rethinking the inception architecture for computer vision. CoRR , abs/1512.00567, 2015.
[37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In
Advances in Neural Information Processing Systems , 2015.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine
translation system: Bridging the gap between human and machine translation. arXiv preprint
arXiv:1609.08144 , 2016.
[39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with
fast-forward connections for neural machine translation. CoRR , abs/1606.04199, 2016.
[40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate
shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume
1: Long Papers) , pages 434–443. ACL, August 2013.
Attention Visualizations
Figure 3: An example of the attention mechanism following long-distance dependencies in the
encoder self-attention in layer 5 of 6, visualized over the sentence "It is in this spirit that a majority of
American governments have passed new laws since 2009 making the registration or voting process
more difficult." Many of the attention heads attend to a distant dependency of the verb ‘making’,
completing the phrase ‘making...more difficult’. Attentions here shown only for the word ‘making’.
Different colors represent different heads. Best viewed in color.
Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution,
visualized over the sentence "The Law will never be perfect, but its application should be just - this is
what we are missing, in my opinion." Top: Full attentions for head 5. Bottom: Isolated attentions from
just the word ‘its’ for attention heads 5 and 6. Note that the attentions are very sharp for this word.
Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the
sentence. We give two such examples, from two different heads from the encoder self-attention
at layer 5 of 6, over the same sentence as in Figure 4. The heads clearly learned to perform different tasks.