filename | text
---|---|
2203.14263.pdf |
A General Survey on Attention Mechanisms in
Deep Learning
Gianni Brauwers and Flavius Frasincar
Abstract —Attention is an important mechanism that can be employed for a variety of deep learning models across many different
domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The
various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a
comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed,
and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in
the field of attention models is considered.
Index Terms —Attention models, deep learning, introductory and survey, neural nets, supervised learning
1 INTRODUCTION
THE idea of mimicking human attention first arose in the
field of computer vision [1], [2] in an attempt to reduce
the computational complexity of image processing while
improving performance by introducing a model that would
only focus on specific regions of images instead of the entire
picture. However, the true starting point of the attention
mechanisms we know today is often attributed to
the field of natural language processing [3]. Bahdanau et
al. [3] implement attention in a machine translation model to
address certain issues with the structure of recurrent neural
networks. After Bahdanau et al. [3] emphasized the advan-
tages of attention, the attention techniques were refined [4]
and quickly became popular for a variety of tasks, such as
text classification [5], [6], image captioning [7], [8], sentiment
analysis [6], [9], and speech recognition [10], [11], [12].
Attention has become a popular technique in deep learn-
ing for several reasons. Firstly, models that incorporate
attention mechanisms attain state-of-the-art results for all
of the previously mentioned tasks, and many others. Fur-
thermore, most attention mechanisms can be trained jointly
with a base model, such as a recurrent neural network or
a convolutional neural network, using regular backpropa-
gation [3]. Additionally, attention introduces a certain degree
of interpretability into neural network models [8], which are
generally known to be difficult to interpret.
Moreover, the popularity of attention mechanisms was fur-
ther boosted by the introduction of the Transformer
model [13], which demonstrated how effective attention can
be. Attention was originally introduced as an extension to
recurrent neural networks [14]. However, the Transformer
model proposed in [13] poses a major development in at-
tention research as it demonstrates that the attention mech-
anism is sufficient to build a state-of-the-art model. This
means that disadvantages, such as the fact that recurrent
neural networks are particularly difficult to parallelize, can
G. Brauwers and F. Frasincar are with the Erasmus School of Economics,
Erasmus University Rotterdam, 3000 DR, Rotterdam, the Netherlands (e-
mail:{frasincar, brauwers}@ese.eur.nl).
Manuscript received July 6, 2020; revised June 21, 2021; Corresponding
author: F. Frasincar.
be circumvented. As was the case for the introduction
of the original attention mechanism [3], the Transformer
model was created for machine translation, but was quickly
adopted to be used for other tasks, such as image processing
[15], video processing [16], and recommender systems [17].
The purpose of this survey is to explain the general
form of attention, and provide a comprehensive overview
of attention techniques in deep learning. Other surveys have
already been published on the subject of attention models.
For example, in [18], a survey is presented on attention in
computer vision, [19] provides an overview of attention in
graph models, and [20], [21], [22] are all surveys on attention
in natural language processing. This paper partly builds
on the information presented in the previously mentioned
surveys. Yet, we provide our own significant contributions.
The main difference between this survey and the previously
mentioned ones is that the other surveys generally focus
on attention models within a certain domain. This survey,
however, provides a cross-domain overview of attention
techniques. We discuss the attention techniques in a general
way, allowing them to be understood and applied in a
variety of domains. Furthermore, we found the taxonomies
presented in previous surveys to be lacking the depth and
structure needed to properly distinguish the various atten-
tion mechanisms. Additionally, certain significant attention
techniques have not yet been properly discussed in previ-
ous surveys, while other presented attention mechanisms
seem to be lacking either technical details or intuitive ex-
planations. Therefore, in this paper, we present important
attention techniques by means of a single framework using
a uniform notation, a combination of both technical and in-
tuitive explanations for each presented attention technique,
and a comprehensive taxonomy of attention mechanisms.
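For orientation before the formal treatment in Section 2, the following is a minimal sketch of the generic query-key-value computation that most surveyed attention mechanisms specialize. The scaled dot-product score used here is only one common choice of score function, and the notation is not the survey's uniform notation.

```python
import numpy as np

def softmax(x):
    x = x - x.max()                      # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention(query, keys, values):
    """Score each key against the query, normalize, and average the values."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # alignment scores (scaled dot product)
    weights = softmax(scores)            # attention weights sum to one
    return weights @ values, weights     # context vector and its weights

rng = np.random.default_rng(0)
context, w = attention(rng.normal(size=8),         # one query
                       rng.normal(size=(5, 8)),    # five keys
                       rng.normal(size=(5, 16)))   # five values
print(context.shape, w.round(3))
```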
The structure of this paper is as follows. Section 2 in-
troduces a general attention model that provides the reader
with a basic understanding of the properties of attention
and how it can be applied. One of the main contributions
of this paper is the taxonomy of attention techniques pre-
sented in Section 3. In this section, attention mechanisms
are explained and categorized according to the presented |
2210.00312.pdf | Published as a conference paper at ICLR 2023
MULTIMODAL ANALOGICAL REASONING OVER
KNOWLEDGE GRAPHS
Ningyu Zhang1∗, Lei Li1∗, Xiang Chen1∗, Xiaozhuan Liang1, Shumin Deng2, Huajun Chen1†
1Zhejiang University, AZFT Joint Lab for Knowledge Engine
2National University of Singapore
{zhangningyu,leili21,xiang chen,liangxiaozhuan,231sm,huajunsir }@zju.edu.cn
ABSTRACT
Analogical reasoning is fundamental to human cognition and holds an important
place in various fields. However, previous studies mainly focus on single-modal
analogical reasoning and ignore taking advantage of structure knowledge. No-
tably, the research in cognitive psychology has demonstrated that information
from multimodal sources always brings more powerful cognitive transfer than
single modality sources. To this end, we introduce the new task of multimodal
analogical reasoning over knowledge graphs, which requires multimodal reason-
ing ability with the help of background knowledge. Specifically, we construct
a Multimodal Analogical Reasoning data Set (MARS) and a multimodal knowl-
edge graph MarKG. We evaluate with multimodal knowledge graph embedding
and pre-trained Transformer baselines, illustrating the potential challenges of the
proposed task. We further propose a novel model-agnostic Multimodal analogical
reasoning framework with Transformer (MarT) motivated by the structure map-
ping theory, which can obtain better performance. We hope our work can deliver
benefits and inspire future research1.
1 INTRODUCTION
Analogical reasoning – the ability to perceive and use relational similarity between two situations
or events – holds an important place in human cognition (Johnson-Laird, 2006; Wu et al., 2020;
Bengio et al., 2021; Chen et al., 2022a) and can provide back-end support for various fields such
as education (Thagard, 1992) and creativity (Goel, 1997), thus appealing to the AI community. Early on,
Mikolov et al. (2013b); Gladkova et al. (2016a); Ethayarajh et al. (2019a) propose visual analogical
reasoning aiming at lifting machine intelligence in Computer Vision (CV) by associating vision
with relational, structural, and analogical reasoning. Meanwhile, researchers of Natural Language
Processing (NLP) hold the connectionist assumption (Gentner, 1983) of linear analogy (Ethayarajh
et al., 2019b); for example, the relation between two words can be inferred through vector arithmetic
of word embeddings. However, it is still an open question whether artificial neural networks are also
capable of recognizing analogies among different modalities.
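The linear-analogy assumption described above can be made concrete with a toy example: the relation between two entities is approximated by the difference of their embedding vectors, and the analogical target is the nearest neighbor of the transported vector. The 3-dimensional embeddings below are hand-made for illustration, not trained.

```python
import numpy as np

# Toy embeddings illustrating linear analogy: king - man + woman ~ queen.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def analogy(a, b, c, emb):
    """Solve a : b :: c : ? by vector arithmetic and cosine similarity."""
    target = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "king", "woman", emb))  # -> queen
```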
Note that humans can quickly acquire new abilities based on finding a common relational system
between two exemplars, situations, or domains. Based on Mayer’s Cognitive Theory of multimedia
learning (Hegarty & Just, 1993; Mayer, 2002), human learners often perform better on tests with
analogy when they have learned from multimodal sources than single-modal sources. Evolving
from recognizing single-modal analogies to exploring multimodal reasoning for neural models, we
emphasize the importance of a new kind of analogical reasoning task with Knowledge Graphs (KGs).
In this paper, we introduce the task of multimodal analogical reasoning over knowledge graphs to fill
this blank. Unlike the previous multiple-choice QA setting, we directly predict the analogical target
and formulate the task as link prediction without explicitly providing relations. Specifically, the
task can be formalized as (e_h, e_t) : (e_q, ?) with the help of background multimodal knowledge graph
∗Equal contribution and shared co-first authorship.
†Corresponding author.
1Code and datasets are available in https://github.com/zjunlp/MKG_Analogy. |
2310.12397.pdf | GPT-4 Doesn’t Know It’s Wrong: An Analysis of
Iterative Prompting for Reasoning Problems
Kaya Stechly∗, Matthew Marquez∗, Subbarao Kambhampati∗
Abstract
There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered by a slew of
counterexamples, ranging from multiplication to simple planning, there is still the
widespread belief that LLMs can self-critique and improve their own solutions in
an iterative fashion. This belief seemingly rests on the assumption that verification
of correctness should be easier than generation–a rather classical argument from
computational complexity, which should be irrelevant to LLMs to the extent that what
they are doing is approximate retrieval. In this paper, we set out to systematically
investigate the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring, a canonical NP-complete reasoning problem that is related to proposi-
tional satisfiability as well as practical problems like scheduling and allocation.
We present a principled empirical study of the performance of GPT4 in solving
graph coloring instances or verifying the correctness of candidate colorings–both in
direct and iterative modes. In iterative modes, we experiment both with the model
critiquing its own answers and an external correct reasoner verifying proposed
solutions. In both cases, we analyze whether the content of the criticisms actually
affects bottom line performance. The study seems to indicate that (i) LLMs are bad
at solving graph coloring instances, (ii) they are no better at verifying a solution, and
thus are not effective in iterative modes with LLMs critiquing LLM-generated
solutions, and (iii) the correctness and content of the criticisms, whether by LLMs or
external solvers, seems largely irrelevant to the performance of iterative prompting.
We show that the observed effectiveness of LLMs in iterative settings is largely due
to the correct solution being fortuitously present in the top-k completions of the
prompt (and being recognized as such by an external verifier). Our results thus call
into question claims about the self-critiquing capabilities of state of the art LLMs.
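The asymmetry the abstract appeals to, that verifying a coloring is far cheaper than producing one, can be seen from how little a verifier has to do. The checker below is a generic proper-coloring verifier in the spirit of the external verifier used in the iterative experiments; it is not the paper's actual harness or prompts.

```python
def verify_coloring(edges, coloring, num_colors):
    """Check a candidate coloring: every vertex gets a color within budget
    and no edge joins two vertices of the same color."""
    if any(c < 0 or c >= num_colors for c in coloring.values()):
        return False, "color out of range"
    for u, v in edges:
        if u not in coloring or v not in coloring:
            return False, "some vertex is uncolored"
        if coloring[u] == coloring[v]:
            return False, f"edge ({u}, {v}) has both endpoints colored {coloring[u]}"
    return True, "valid coloring"

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]                     # a triangle plus a pendant vertex
print(verify_coloring(edges, {0: 0, 1: 1, 2: 2, 3: 0}, 3))   # (True, 'valid coloring')
print(verify_coloring(edges, {0: 0, 1: 1, 2: 0, 3: 0}, 3))   # (False, ...)
```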
1 Introduction
Large Language Models (LLMs), essentially n-gram models on steroids which have been trained on
web-scale language corpora, have caught the imagination of the AI research community with linguistic
behaviors that no one expected text completion systems to possess. Their seeming versatility has led
many researchers to wonder whether they can also do well on reasoning tasks typically associated
with system 2 competency. Initial excitement based on anecdotal performance of LLMs on reasoning
tasks has dissipated to some extent by the recent spate of studies questioning the robustness of
such behaviors–be it planning [ 17,8], simple arithmetic and logic [ 5], or general mathematical and
abstract benchmarks [14, 6]. There still exists considerable optimism that even if LLMs can't generate
correct solutions in one go, their accuracy improves in an iterative prompting regime, where LLMs
will be able to "self-critique" their candidate solutions and refine them to the point of correctness
[20, 19, 15, 18, 7]. This belief seems to rest largely on the assumption that verification of correctness
∗Arizona State University, Tempe.
Preprint. Under review. |
2309.14322.pdf | Small-scale proxies for large-scale Transformer training instabilities
Mitchell Wortsman Peter J. Liu Lechao Xiao Katie Everett
Alex Alemi Ben Adlam John D. Co-Reyes Izzeddin Gur Abhishek Kumar
Roman Novak Jeffrey Pennington Jascha Sohl-dickstein Kelvin Xu
Jaehoon Lee*, Justin Gilmer*, Simon Kornblith*
Google DeepMind
Abstract
Teams that have trained large Transformer-based mod-
els have reported training instabilities at large scale
that did not appear when training with the same
hyperparameters at smaller scales. Although the
causes of such instabilities are of scientific interest,
the amount of resources required to reproduce them
has made investigation difficult. In this work, we
seek ways to reproduce and study training stability
and instability at smaller scales. First, we focus on
two sources of training instability described in pre-
vious work: the growth of logits in attention layers
(Dehghani et al., 2023) and divergence of the output
logits from the log probabilities (Chowdhery et al.,
2022). By measuring the relationship between learn-
ing rate and loss across scales, we show that these
instabilities also appear in small models when training
at high learning rates, and that mitigations previously
employed at large scales are equally effective in this
regime. This prompts us to investigate the extent to
which other known optimizer and model interventions
influence the sensitivity of the final loss to changes
in the learning rate. To this end, we study meth-
ods such as warm-up, weight decay, and the µParam
(Yang et al., 2022), and combine techniques to train
small models that achieve similar losses across orders
of magnitude of learning rate variation. Finally, to
conclude our exploration we study two cases where
instabilities can be predicted before they emerge by
examining the scaling behavior of model activation
and gradient norms.
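For reference, the qk-layernorm mitigation [11] studied here normalizes queries and keys before the attention logits are formed, which keeps the logits bounded even when the raw activations grow. The numpy sketch below is a simplification (a single head, and no learned scale or bias in the normalization):

```python
import numpy as np

def layernorm(x, eps=1e-6):
    # Normalize the last axis; learned scale and bias omitted for brevity.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def attention_logits(q, k, qk_layernorm=True):
    """Attention logits with optional qk-layernorm applied to queries and keys."""
    if qk_layernorm:
        q, k = layernorm(q), layernorm(k)
    return q @ k.T / np.sqrt(q.shape[-1])

rng = np.random.default_rng(0)
q = 50 * rng.normal(size=(4, 64))   # deliberately large activations
k = 50 * rng.normal(size=(4, 64))
print(np.abs(attention_logits(q, k, qk_layernorm=False)).max())  # large logits
print(np.abs(attention_logits(q, k, qk_layernorm=True)).max())   # bounded logits
```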
1 Introduction
Scaling up transformers has led to remarkable progress
from chat models to image generation. However, not
[Figure 1 plots: final eval loss vs. learning rate (10^-4 to 10^0) for models with N = 2.4e+06 to 1.2e+09 parameters, with qk-layernorm = True/False; bottom panel: LR sensitivity vs. number of parameters.]
Figure 1: Qk-layernorm [11] enables stable training across
three orders of magnitude of learning rate (LR) variation.
(Top) For transformers with Nparameters, we plot the
effect of learning rate on final evaluation loss. (Bottom)
We use LR sensitivity to summarize the top plot. LR sensi-
tivity measures the expected deviation from optimal when
varying learning rate across three orders of magnitude.
Qk-layernorm reduces LR sensitivity, but LR sensitivity
still increases with model scale.
|
2308.05660.pdf | Thermodynamic Linear Algebra
Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon,
Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles
Normal Computing Corporation, New York, New York, USA
Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and
machine learning. Hence, accelerating these primitives with novel computing hardware would have
tremendous economic impact. Quantum computing has been proposed for this purpose, although
the resource requirements are far beyond current technological capabilities, so this approach remains
long-term in timescale. Here we consider an alternative physics-based computing paradigm based
on classical thermodynamics, to provide a near-term approach to accelerating linear algebra.
At first sight, thermodynamics and linear algebra seem to be unrelated fields. In this work, we
connect solving linear algebra problems to sampling from the thermodynamic equilibrium distri-
bution of a system of coupled harmonic oscillators. We present simple thermodynamic algorithms
for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix
determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously
establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly
in matrix dimension. Our algorithms exploit thermodynamic principles like ergodicity, entropy,
and equilibration, highlighting the deep connection between these two seemingly distinct fields, and
opening up algebraic applications for thermodynamic computing hardware.
I. Introduction
Basic linear algebra primitives such as solving a linear system of the form Ax=band obtaining the
inverse of a matrix are present in many modern algorithms. Such primitives are relevant to a multitude
of applications, including for example optimal control of dynamic systems and resource allocation. They
are also a common subroutine of many artificial intelligence (AI) algorithms, and account for a substantial
portion of the time and energy costs in some cases.
The most common method to perform these primitives is LU decomposition, whose time-complexity
scales as O(d3). Many proposals have been made to accelerate such primitives, for example using iterative
methods such as the conjugate gradient method. In the last decade, these primitives have been accelerated
by hardware improvements, notably by their implementation on graphical processing units (GPUs), fueling
massive parallelization. However, the scaling of these methods is still a prohibitive factor, and obtaining
a good approximate solution to a dense matrix of more than a few tens of thousand dimensions remains
challenging.
Exploiting physics to solve mathematical problems is a deep idea, with much focus on solving optimization
problems [1–3]. In the context of linear algebra, much attention has been paid to quantum computers [4],
since the mathematics of discrete-variable quantum mechanics matches that of linear algebra. A quantum
algorithm [5] to solve linear systems has been proposed, which for sparse and well-conditioned matrices
scales as logd. However, the resource requirements [6] for this algorithm are far beyond current hardware
capabilities. More generally building large-scale quantum hardware has remained difficult [7], and variational
quantum algorithms for linear algebra [8–10] have battled with vanishing gradient issues [11–13].
Therefore, the search for alternative hardware proposals that can exploit physical dynamics to accelerate
linear algebra primitives has been ongoing. Notably, memristor crossbar arrays have been of interest for
accelerating matrix-vector multiplications [14, 15]. Solving linear systems has also been the subject of
analog computing approaches [16].
Recently, we defined a new class of hardware, built from stochastic, analog building blocks, which is
ultimately thermodynamic in nature [17]. (See also probabilistic-bit computers [18–20] and thermodynamic
neural networks [21–24] for alternative approaches to thermodynamic computing [25]). AI applications like
generative modeling are a natural fit for this thermodynamic hardware, where stochastic fluctuations are
exploited to generate novel samples.
In this work, we surprisingly show that the same thermodynamic hardware from Ref. [17] can also be used
to accelerate key primitives in linear algebra. Thermodynamics is not typically associated with linear algebra,
and connecting these two fields is therefore non-trivial. Here, we exploit the fact that the mathematics of
harmonic oscillator systems is inherently affine (i.e., linear), and hence we can map linear algebraic primitives
onto such systems. (See also Ref. [26] for a discussion of harmonic oscillators in the context of quantum
computing speedups.) We show that simply by sampling from the thermal equilibrium distribution of coupled
harmonic oscillators, one can solve a variety of linear algebra problems. |
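The connection can be illustrated with a purely digital stand-in for the proposed hardware: for a symmetric positive-definite A, the equilibrium (Boltzmann) distribution of the potential U(x) = (1/2) x^T A x - b^T x is a Gaussian with mean A^{-1} b, so time-averaging samples from simulated overdamped Langevin dynamics recovers the solution of Ax = b. The step size, burn-in, and sample counts below are illustrative, and the physical proposal uses real (not simulated) dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)      # symmetric positive-definite matrix
b = rng.normal(size=d)

# Overdamped Langevin dynamics for U(x) = 0.5 x^T A x - b^T x.
# Its stationary distribution is N(A^{-1} b, A^{-1}) at unit temperature,
# so the time-average of x estimates the solution of A x = b.
dt, burn_in, steps = 1e-3, 20_000, 200_000
x = np.zeros(d)
total, n = np.zeros(d), 0
for t in range(burn_in + steps):
    x = x - dt * (A @ x - b) + np.sqrt(2 * dt) * rng.normal(size=d)
    if t >= burn_in:
        total += x
        n += 1

print("sampled estimate:", (total / n).round(3))
print("direct solve:    ", np.linalg.solve(A, b).round(3))
```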
2309.10150.pdf | Q-Transformer: Scalable Offline Reinforcement
Learning via Autoregressive Q-Functions
Yevgen Chebotar∗, Quan Vuong∗, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar,
Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum,
Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath,
Jaspiar Singht, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, Sergey Levine
Google DeepMind
Abstract: In this work, we present a scalable reinforcement learning method for
training multi-task policies from large offline datasets that can leverage both hu-
man demonstrations and autonomously collected data. Our method uses a Trans-
former to provide a scalable representation for Q-functions trained via offline tem-
poral difference backups. We therefore refer to the method as Q-Transformer.
By discretizing each action dimension and representing the Q-value of each ac-
tion dimension as separate tokens, we can apply effective high-capacity sequence
modeling techniques for Q-learning. We present several design decisions that en-
able good performance with offline RL training, and show that Q-Transformer
outperforms prior offline RL algorithms and imitation learning techniques on a
large diverse real-world robotic manipulation task suite. The project’s website
and videos can be found at qtransformer.github.io
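A minimal sketch of the action tokenization described in the abstract: each continuous action dimension is discretized into bins, and an action is decoded autoregressively by maximizing one dimension's Q-values at a time. The bin count, action bounds, and the stand-in scoring function below are placeholders for the Transformer trained with offline temporal-difference backups.

```python
import numpy as np

NUM_BINS = 256
LOW, HIGH = -1.0, 1.0   # illustrative per-dimension action bounds

def discretize(action):
    """Map each continuous action dimension to an integer bin token."""
    clipped = np.clip(action, LOW, HIGH)
    return np.floor((clipped - LOW) / (HIGH - LOW) * (NUM_BINS - 1)).astype(int)

def undiscretize(tokens):
    """Map bin tokens back to bin-center continuous values."""
    return LOW + (np.asarray(tokens) + 0.5) / NUM_BINS * (HIGH - LOW)

def q_values(state, prefix_tokens):
    """Stand-in for the Transformer: one Q-value per bin of the next action
    dimension, conditioned on the state and the previously decoded tokens."""
    seed = hash((round(float(state.sum()), 6), tuple(prefix_tokens))) % (2**32)
    return np.random.default_rng(seed).normal(size=NUM_BINS)

def greedy_action(state, action_dim):
    """Decode an action autoregressively, one maximized dimension at a time."""
    tokens = []
    for _ in range(action_dim):
        tokens.append(int(np.argmax(q_values(state, tokens))))
    return undiscretize(tokens)

state = np.random.default_rng(0).normal(size=8)
action = greedy_action(state, action_dim=4)
print(action, discretize(action))
```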
1 Introduction
[Figure 1 diagram: human demonstrations and autonomously collected data form mixed quality data, trained with Monte-Carlo returns, autoregressive Q-learning, and conservative regularization, producing Q-values per action dimension at each environment step.]
Figure 1: Q-Transformer enables training high-capacity sequential architectures on mixed quality
data. Our policies are able to improve upon human demonstrations and execute a variety of
manipulation tasks in the real world.
Robotic learning methods that incorporate large
and diverse datasets in combination with high-
capacity expressive models, such as Transform-
ers [1, 2, 3, 4, 5, 6], have the potential to acquire
generalizable and broadly applicable policies that
perform well on a wide variety of tasks [1, 2].
For example, these policies can follow natural
language instructions [4, 7], perform multi-stage
behaviors [8, 9], and generalize broadly across
environments, objects, and even robot morpholo-
gies [10, 3]. However, many of the recently pro-
posed high-capacity models in the robotic learn-
ing literature are trained with supervised learn-
ing methods. As such, the performance of the re-
sulting policy is limited by the degree to which
human demonstrators can provide high-quality
demonstration data. This is limiting for two rea-
sons. First, we would like robotic systems that
are more proficient than human teleoperators, ex-
ploiting the full potential of the hardware to per-
form tasks quickly, fluently, and reliably. Second,
we would like robotic systems that get better with
autonomously gathered experience, rather than
relying entirely on high-quality demonstrations.
Reinforcement learning in principle provides
both of these capabilities. A number of promising recent advances demonstrate the successes of
large-scale robotic RL in varied settings, such as robotic grasping and stacking [11, 12], learning
heterogeneous tasks with human-specified rewards [13], learning multi-task policies [14, 15], learn-
ing goal-conditioned policies [16, 17, 18, 19], and robotic navigation [20, 21, 22, 23, 24]. However,
∗Equal contribution.
Corresponding emails: chebotar@google.com, quanhovuong@google.com .
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA. |
2109.01652.pdf | Published as a conference paper at ICLR 2022
FINETUNED LANGUAGE MODELS AREZERO-SHOT
LEARNERS
Jason Wei∗, Maarten Bosma∗, Vincent Y. Zhao∗, Kelvin Guu∗, Adams Wei Yu,
Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le
Google Research
ABSTRACT
This paper explores a simple method for improving the zero-shot learning abilities
of language models. We show that instruction tuning —finetuning language models
on a collection of datasets described via instructions—substantially improves zero-
shot performance on unseen tasks.
We take a 137B parameter pretrained language model and instruction tune it on
over 60 NLP datasets verbalized via natural language instruction templates. We
evaluate this instruction-tuned model, which we call FLAN, on unseen task types.
FLAN substantially improves the performance of its unmodified counterpart and
surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even
outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC,
OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning
datasets, model scale, and natural language instructions are key to the success of
instruction tuning.
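A minimal sketch of what verbalizing a dataset via a natural language instruction template looks like, using the NLI example shown in Figure 1 (the template wording and option formatting are illustrative rather than FLAN's exact templates):

```python
def verbalize_nli(premise, hypothesis):
    """Turn a raw NLI example into an instruction-style prompt."""
    options = ["yes", "it is not possible to tell", "no"]
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis?\n"
        "OPTIONS:\n" + "\n".join(f"- {o}" for o in options)
    )

print(verbalize_nli(
    "At my age you will probably have learnt one lesson.",
    "It's not certain how many lessons you'll learn by your thirties.",
))
```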
[Figure 1 graphic: instruction-tuning inputs (translation, commonsense reasoning) and an unseen NLI inference example, plus performance on unseen task types (GPT-3 175B zero-shot / GPT-3 175B few-shot / FLAN 137B zero-shot): Natural language inference 42.9 / 53.2 / 56.2; Reading comprehension 63.7 / 72.6 / 77.4; Closed-book QA 49.8 / 55.7 / 56.6.]
Figure 1: Top: overview of instruction tuning and FLAN. Instruction tuning finetunes a pretrained
language model on a mixture of tasks phrased as instructions. At inference time, we evaluate on
an unseen task type; for instance, we could evaluate the model on natural language inference (NLI)
when no NLI tasks were seen during instruction tuning. Bottom: performance of zero-shot FLAN,
compared with zero-shot and few-shot GPT-3, on three unseen task types where instruction tuning
improved performance substantially out of ten we evaluate. NLI datasets: ANLI R1–R3, CB, RTE.
Reading comprehension datasets: BoolQ, MultiRC, OBQA. Closed-book QA datasets: ARC-easy,
ARC-challenge, NQ, TriviaQA.
∗Lead contributors. Author contributions listed at end of paper.
|
1610.06258.pdf | Using Fast Weights to Attend to the Recent Past
Jimmy Ba, University of Toronto, jimmy@psi.toronto.edu
Geoffrey Hinton, University of Toronto and Google Brain, geoffhinton@google.com
Volodymyr Mnih, Google DeepMind, vmnih@google.com
Joel Z. Leibo, Google DeepMind, jzl@google.com
Catalin Ionescu, Google DeepMind, cdi@google.com
Abstract
Until recently, research on artificial neural networks was largely restricted to sys-
tems with only two types of variable: Neural activities that represent the current
or recent input and weights that learn to capture regularities among inputs, outputs
and payoffs. There is no good reason for this restriction. Synapses have dynam-
ics at many different time-scales and this suggests that artificial neural networks
might benefit from variables that change slower than activities but much faster
than the standard weights. These “fast weights” can be used to store temporary
memories of the recent past and they provide a neurally plausible way of imple-
menting the type of attention to the past that has recently proved very helpful in
sequence-to-sequence models. By using fast weights we can avoid the need to
store copies of neural activity patterns.
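A minimal sketch of the kind of fast associative memory the abstract describes, using a common outer-product (Hebbian) update with decay: recent hidden activity is retrieved by a matrix-vector product rather than by storing explicit copies of activity patterns. The decay and fast learning rates here are illustrative.

```python
import numpy as np

H = 16                       # number of hidden units
lam, eta = 0.95, 0.5         # illustrative decay rate and fast learning rate

rng = np.random.default_rng(0)
A = np.zeros((H, H))         # fast weights: O(H^2) capacity, fast dynamics
history = []

for t in range(20):
    h = rng.normal(size=H)                 # hidden activity at step t
    A = lam * A + eta * np.outer(h, h)     # decaying outer-product update
    history.append(h)

# Querying the fast weights with a past hidden vector retrieves it (scaled),
# with more recent patterns recovered more strongly than older ones.
for t in (19, 10, 0):
    h = history[t]
    r = A @ h
    cos = r @ h / (np.linalg.norm(r) * np.linalg.norm(h))
    print(f"step {t:2d}: cosine(A @ h, h) = {cos:.2f}, retrieval norm = {np.linalg.norm(r):.1f}")
```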
1 Introduction
Ordinary recurrent neural networks typically have two types of memory that have very different time
scales, very different capacities and very different computational roles. The history of the sequence
currently being processed is stored in the hidden activity vector, which acts as a short-term memory
that is updated at every time step. The capacity of this memory is O(H), where H is the number
of hidden units. Long-term memory about how to convert the current input and hidden vectors into
the next hidden vector and a predicted output vector is stored in the weight matrices connecting the
hidden units to themselves and to the inputs and outputs. These matrices are typically updated at the
end of a sequence and their capacity is O(H^2) + O(IH) + O(HO), where I and O are the numbers
of input and output units.
Long short-term memory networks [Hochreiter and Schmidhuber, 1997] are a more complicated
type of RNN that work better for discovering long-range structure in sequences for two main reasons:
First, they compute increments to the hidden activity vector at each time step rather than recomputing
the full vector1. This encourages information in the hidden states to persist for much longer. Second,
they allow the hidden activities to determine the states of gates that scale the effects of the weights.
These multiplicative interactions allow the effective weights to be dynamically adjusted by the input
or hidden activities via the gates. However, LSTMs are still limited to a short-term memory capacity
of O(H) for the history of the current sequence.
Until recently, there was surprisingly little practical investigation of other forms of memory in recur-
rent nets despite strong psychological evidence that it exists and obvious computational reasons why
it was needed. There were occasional suggestions that neural networks could benefit from a third
form of memory that has much higher storage capacity than the neural activities but much faster
dynamics than the standard slow weights. This memory could store information specific to the his-
tory of the current sequence so that this information is available to influence the ongoing processing
1This assumes the "remember gates" of the LSTM memory cells are set to one. |
sciadv.adn0042.pdf | SCIENCE ADVANCES | RESEARCH ARTICLE
VIROLOGY
Epistatic pathways can drive HIV-1 escape from
integrase strand transfer inhibitors
Yuta Hikichi1, Jonathan R. Grover2, Alicia Schäfer2, Walther Mothes2, Eric O. Freed1*
People living with human immunodeficiency virus (HIV) receiving integrase strand transfer inhibitors (INSTIs)
have been reported to experience virological failure in the absence of resistance mutations in integrase. To elucidate
INSTI resistance mechanisms, we propagated HIV-1 in the presence of escalating concentrations of the INSTI
dolutegravir. HIV-1 became resistant to dolutegravir by sequentially acquiring mutations in the envelope glyco-
protein (Env) and the nucleocapsid protein. The selected Env mutations enhance the ability of the virus to spread
via cell-cell transfer, thereby increasing the multiplicity of infection (MOI). While the selected Env mutations confer
broad resistance to multiple classes of antiretrovirals, the fold resistance is ~2 logs higher for INSTIs than for other
classes of drugs. We demonstrate that INSTIs are more readily overwhelmed by high MOI than other classes of
antiretrovirals. Our findings advance the understanding of how HIV-1 can evolve resistance to antiretrovirals,
including the potent INSTIs, in the absence of drug-target gene mutations.
INTRODUCTION
Six classes of antiretrovirals (ARVs) have been approved for clinical
use by the US Food and Drug Administration: nucleoside reverse
transcriptase (RT) inhibitors (NRTIs), nonnucleoside RT inhibitors
(NNRTIs), integrase strand transfer inhibitors (INSTIs), protease
inhibitors (PIs), entry inhibitors, and a recently approved capsid
inhibitor, lenacapavir (LEN) (1 , 2). Combination antiretroviral therapy
(cART) has markedly reduced human immunodeficiency virus
(HIV)–associated morbidity and mortality. However, resistance to
ARVs does arise in some people living with HIV (PLWH), often
associated with poor adherence, use of suboptimal drug regimens,
and/or lack of viral load monitoring, particularly in poorly re-
sourced areas (3). In most cases, drug resistance is caused by muta-
tions in the genes targeted by the drugs, often by interfering with the
interaction between the drug and the viral target (3). Thus, in the
clinical setting, drug resistance monitoring is largely focused on
drug- target genes. Recently approved ARVs have been developed
with the aim of overcoming resistant variants observed in the clinic.
For example, second- generation INSTIs, such as dolutegravir (DTG)
and bictegravir (BIC), show some efficacy against IN mutants that
are resistant to first- generation INSTIs like raltegravir (RAL) (4).
These second- generation INSTIs also exhibit higher genetic barriers
to resistance compared to the first- generation INSTIs and RT in-
hibitors ( 5). At present, regimens containing DTG are therefore rec-
ommended as the preferred first- line regimen for most PLWH (6).
Retroviral integration requires two enzymatic reactions catalyzed
by IN: 3′ - end processing, during which the enzyme cleaves two
nucleotides from the 3 ′ ends of the newly synthesized linear viral
DNA, and DNA strand transfer, which entails the insertion of the
viral DNA ends into host cell target DNA. The integration reaction
takes place in a macromolecular complex known as the intasome,
which comprises an IN multimer and the two viral DNA ends (4).
INSTIs inhibit the strand transfer reaction by binding IN and the
viral DNA ends in the intasome and chelating the Mg++ ions required for IN catalytic activity (4 ). Five INSTIs are currently
approved for clinical use: two “first- generation” INSTIs, RAL and
elvitegravir (EVG), and three “second- generation” INSTIs, DTG,
BIC, and cabotegravir (CAB).
Despite the predominant role of drug- target gene mutations in
HIV- 1 drug resistance, mutations outside drug- target genes can
contribute to drug resistance. Particularly in the case of PIs and
INSTIs, some PLWH experience virological failure in the absence
of mutations in the target genes (7 –11). Mutations in Gag and
the envelope glycoprotein (Env) have been implicated in PI resist-
ance ( 12, 13). In vitro studies have reported that mutations in the
3′polypurine tract (3′ PPT) reduce the susceptibility of HIV- 1 to
INSTIs (14–16). 3′PPT mutations may lead to the accumulation of
unintegrated 1- LTR circles that can support the expression of viral
proteins (14, 16) particularly in cell lines that express HTLV- 1 Tax
(14). Wijting et al . (11) reported a distinct set of mutations in the
3′PPT from a patient failing DTG monotherapy in the absence of
INSTI resistance mutations in IN. However, in other studies, these
in vivo–derived 3′ PPT mutations were found not to confer resistance
to INSTIs in vitro (17). It is therefore still unclear whether, or to
what extent, 3′PPT mutations contribute to INSTI resistance in vivo.
Nevertheless, as more potent inhibitors with higher genetic barriers
to resistance are developed, unconventional drug resistance pathways
will become important to consider.
The Env glycoproteins play a central role in HIV- 1 entry and
immune evasion. Env exists as a metastable trimer of three pro-
tomers comprising gp120 and gp41 heterodimers on the surface of
the virion and the infected cell. The binding of gp120 to CD4 on the
target cell triggers conformational rearrangement of the Env trimer
that exposes coreceptor (CCR5 or CXCR4) binding sites in gp120.
Subsequent binding of gp120 to coreceptor promotes insertion of
the gp41 fusion peptide into the target cell membrane, and the
refolding of gp41 heptad repeat 1 and 2 (HR1 and HR2) mediates
the fusion of viral and cellular membranes, allowing viral entry into
the cytosol of the target cell (18). Single- molecule Förster resonance
energy transfer (smFRET) analysis has demonstrated that the Env
trimer spontaneously transitions between at least three distinct pre-
fusion conformations: state 1 (pretriggered, closed conformation),
state 2 (necessary, intermediate conformation), and state 3 (fully
1Virus-Cell Interaction Section, HIV Dynamics and Replication Program, Center for Cancer Research, National Cancer Institute, Frederick, MD, USA. 2Department of Microbial Pathogenesis, Yale University School of Medicine, New Haven, CT, USA.
*Corresponding author. Email: efreed@mail.nih.gov
|
10.1016.j.cell.2023.12.034.pdf | Leading Edge
Commentary
Enabling structure-based drug discovery
utilizing predicted models
Edward B. Miller,1,* Howook Hwang,1 Mee Shelley,2 Andrew Placzek,2 João P.G.L.M. Rodrigues,1 Robert K. Suto,3
Lingle Wang,1 Karen Akinsanya,1 and Robert Abel1
1Schrödinger New York, 1540 Broadway, 24th Floor, New York, NY 10036, USA
2Schrödinger Portland, 101 SW Main Street, Suite 1300, Portland, OR 97204, USA
3Schrödinger Framingham, 200 Staples Drive, Suite 210, Framingham, MA 01702, USA
*Correspondence: ed.miller@schrodinger.com
https://doi.org/10.1016/j.cell.2023.12.034
High-quality predicted structures enable structure-based approaches to an expanding number of drug dis-
covery programs. We propose that by utilizing free energy perturbation (FEP), predicted structures can be
confidently employed to achieve drug design goals. We use structure-based modeling of hERG inhibition
to illustrate this value of FEP.
Introduction
Traditional structure-based drug design offers a rational basis to guide the discovery of novel chemical matter.
Combined with the apparent success of structure-prediction methodology (AlphaFold, RoseTTAFold, et al.), the
domain of applicability of structure-based drug design would, at first glance, appear to have dramatically increased
due to the sudden availability of seemingly high-fidelity predicted structures for any protein sequence. However,
preliminary evidence suggests that AlphaFold struggles to reliably generate experimentally observed alternative
protein conformations.1 Crucially, the utility of these predicted structures for atomistic modeling and drug design
must be scrutinized before they can be deployed in lieu of experimental structures.
The most direct measurement of a predicted structure's accuracy is how well it matches a later solved experimental
structure. This metric is crucial for assessing the performance of structure prediction methods, but within the realm
of drug discovery, the relevance and value of predicted protein structure models is directly related to their impact on
drug design outcomes. Multiple atomic resolution structures, both predicted and experimental, can be used to
rationally optimize molecular properties, such as on-target potency, off-target potency, and absorption, distribution,
metabolism, excretion, and toxicity (ADMET) properties. In this Commentary, we explore how predicted structures
can be confidently applied to these drug design challenges. We focus on free energy perturbation, a computational
assay, to quantify the accuracy of predicted structures.
Motivations for structure prediction
A structure is most useful when it is of the protein target in the therapeutically relevant state. The challenge with
structure-based drug design is being able to obtain the right structure in the disease-relevant state bound with
project chemical matter. As an example, we point to the experimental structural biology pursuits around the
leucine-rich repeat kinase 2 (LRRK2). Mutants of LRRK2 have been implicated in Parkinson's disease. Structures
have been obtained of inactive LRRK2 without an inhibitor, as a monomer (PDB: 7LHW), and as a dimer (PDB:
7LHT), as well as the G2019S mutant (PDB: 7LI3). Later, an active type 1 inhibitor bound structure was published
(PDB: 8TXZ) as well as an inactive state with a type 2 inhibitor (PDB: 8TZE). Functionally, LRRK2 is associated
with cellular trafficking, and a structure of microtubule-bound LRRK2 was also recently published (PDB: 7THY).
Generally, the demand for a protein structure in various physiologically relevant structural and dynamical
states outpaces the supply.
From a structure prediction perspective, numerous publications have offered approaches to bias or to explore
multiple receptor states as part of structure prediction.2,3 Under favorable conditions, a limited number of predicted
structures are presented to the chemist, who must then decide which model or models are worthy of committing
resources toward. This is not a trivial commitment—the expectation is that a predicted structure should precede,
if not outright replace, an experimental structure. Therefore, if a predicted structure is considered accurate, it
should drive consequential decisions, among them which compounds to pursue for costly synthesis, and provide
a clear, ideally quantitative rationale as to why.
a clear, ideally quantitative rationale asto why.
Any predicted structure must be judged
by its fidelity to reality. Rather than focuson measures of the geometric agreement
with some future experimental structure,
we propose here that a more meaningfulquestion is to ask the extent to which
the predicted structure can be used to
model existing structure-activity relation-ships. The expectation is that a modelthat can recapitulate a known structure-
activity relationship (SAR) is qualified to
make predictions for novel compoundsand to drive synthesis of those com-
pounds in response to predicted binding
affinity.
While a large number of methods
ranging from knowledge-based machine
learning to physics-based simulationshave shown promises in predicting pro-tein-ligand binding free energies,
4we
will focus on the application of one of
the most extensively and broadly vali-dated methods, free energy perturbation
(FEP), to evaluate a model’s ability to
|
1805.02867.pdf | Online normalizer calculation for softmax
Maxim Milakov, NVIDIA, mmilakov@nvidia.com
Natalia Gimelshein, NVIDIA, ngimelshein@nvidia.com
Abstract
The Softmax function is ubiquitous in machine learning, multiple previous works
suggested faster alternatives for it. In this paper we propose a way to compute
classical Softmax with fewer memory accesses and hypothesize that this reduction
in memory accesses should improve Softmax performance on actual hardware.
The benchmarks confirm this hypothesis: Softmax accelerates by up to 1.3x and
Softmax+TopK combined and fused by up to 5x.
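The single-pass ("online") normalizer computation of the kind proposed here can be sketched as follows: the running maximum and the running sum of exponentials are updated together, so the safe normalizer is obtained with one read of the input instead of two. This is only a Python reference of the recurrence; the paper's contribution concerns memory accesses and fused GPU kernels, which the sketch does not model.

```python
import numpy as np

def softmax_online(x):
    """Safe softmax whose max and normalizer are found in a single pass:
    when a new maximum appears, the running sum is rescaled accordingly."""
    m = -np.inf   # running maximum
    d = 0.0       # running sum of exp(x_j - m)
    for v in x:
        m_new = max(m, v)
        d = d * np.exp(m - m_new) + np.exp(v - m_new)
        m = m_new
    return np.exp(x - m) / d  # producing probabilities reads x once more

x = np.random.default_rng(0).normal(size=1000) * 50
ref = np.exp(x - x.max())
ref /= ref.sum()
print(np.allclose(softmax_online(x), ref))  # True
```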
1 Introduction
Neural network models are widely used for language modeling, for tasks such as machine transla-
tion [1] and speech recognition [2]. These models compute word probabilities taking into account
the already generated part of the sequence. The probabilities are usually computed by a Projection
layer, which "projects" hidden representation into the output vocabulary space, and a following Soft-
max function, which transforms raw logits into the vector of probabilities. Softmax is utilized
not only for neural networks; for example, it is employed in multinomial logistic regression [3].
A number of previous works suggested faster alternatives to compute word probabilities. Differenti-
ated Softmax [4] and SVD-Softmax [5] replace the projection layer - which is usually just a matrix
multiplication - with more computationally efficient alternatives. Multiple variants of Hierarchical
Softmax [6, 7, 8] split a single Projection+Softmax pair into multiple much smaller versions of these
two functions organized in tree-like structures. Sampled-based approximations, such as Importance
Sampling [9], Noise Contrastive Estimation [10], and Blackout [11] accelerate training by running
Softmax on select elements of the original vector. Finally, Self-Normalized Softmax [12] augments
the objective function to make the softmax normalization term close to 1 (and skip computing it
during inference).
This is not an exhaustive list, but, hopefully, a representative one. Almost all of the approaches
still need to run the original Softmax function, either on the full vector or a reduced one. There are
two exceptions that don't need to compute the softmax normalization term: training with Noise
Contrastive Estimation and inference with Self-Normalized Softmax. All others will benefit from
the original Softmax running faster.
To the best of our knowledge there have been no targeted efforts to improve the performance of the
original Softmax function. We tried to address this shortcoming and figured out a way to compute
Softmax with fewer memory accesses. We benchmarked it to see if those reductions in memory
accesses translate into performance improvements on real hardware.
Preprint. Work in progress. |
10.1101.2024.01.02.573943.pdf | De Novo Atomic Protein Structure Modeling for Cryo-EM
Density Maps Using 3D Transformer and Hidden Markov
Model
Nabin Giri1,2and Jianlin Cheng1,2*
1Electrical Engineering and Computer Science, University of Missouri, Columbia, 65211,
Missouri, USA.
2NextGen Precision Health Institute, University of Missouri, Columbia, 65211, Missouri,
USA.
*Corresponding author(s). E-mail(s): chengji@missouri.edu;
Contributing authors: ngzvh@missouri.edu;
Abstract
Accurately building three-dimensional (3D) atomic structures from 3D cryo-electron microscopy (cryo-
EM) density maps is a crucial step in the cryo-EM-based determination of the structures of protein
complexes. Despite improvements in the resolution of 3D cryo-EM density maps, the de novo con-
version of density maps into 3D atomic structures for protein complexes that do not have accurate
homologous or predicted structures to be used as templates remains a significant challenge. Here,
we introduce Cryo2Struct, a fully automated ab initio cryo-EM structure modeling method that uti-
lizes a 3D transformer to identify atoms and amino acid types in cryo-EM density maps first, and
then employs a novel Hidden Markov Model (HMM) to connect predicted atoms to build backbone
structures of proteins. Tested on a standard test dataset of 128 cryo-EM density maps with varying
resolutions (2.1 - 5.6 Å) and different numbers of residues (730 - 8,416), Cryo2Struct built substan-
tially more accurate and complete protein structural models than the widely used ab initio method
- Phenix in terms of multiple evaluation metrics. Moreover, on a new test dataset of 500 recently
released density maps with varying resolutions (1.9 - 4.0 Å) and different numbers of residues (234
- 8,828), it built more accurate models than on the standard dataset. And its performance is rather
robust against the change of the resolution of density maps and the size of protein structures.
Keywords: cryo-EM, atomic protein structure modeling, deep learning, transformer, Hidden Markov Model
1 Introduction
Determining the three-dimensional (3D) atomic
structures of macromolecules, such as protein
complexes and assemblies [1–3], is fundamental
in structural biology. The 3D arrangement of atoms provides essential insights into the mecha-
nistic understanding of molecular function of pro-
teins [4]. In recent years, cryo-electron microscopy
(cryo-EM) [5] has emerged as a key technol-
ogy for experimentally determining the structures
of large protein complexes and assemblies. How-
ever, modeling atomic protein structures from
|
score-matching-denoising.pdf | 1
A Connection Between Score Matching
and Denoising Autoencoders
Pascal Vincent
vincentp@iro.umontreal.ca
Dept. IRO, Université de Montréal,
CP 6128, Succ. Centre-Ville, Montréal (QC) H3C 3J7, Canada.
Technical Report 1358
Département d’Informatique et de Recherche Opérationnelle
December 2010
THIS IS A PREPRINT VERSION OF A NOTE THAT HAS BEEN
ACCEPTED FOR PUBLICATION IN NEURAL COMPUTATION.
Keywords: autoencoder, energy based models, score matching, denoising, density
estimation.
Abstract
Denoising autoencoders have been previously shown to be competitive alternatives
to Restricted Boltzmann Machines for unsupervised pre-training of each layer of a deep
architecture. We show that a simple denoising autoencoder training criterion is equiv-
alent to matching the score (with respect to the data) of a specific energy based model
to that of a non-parametric Parzen density estimator of the data. This yields several
useful insights. It defines a proper probabilistic model for the denoising autoencoder
technique which makes it in principle possible to sample from them or to rank examples
by their energy. It suggests a different way to apply score matching that is related to
learning to denoise and does not require computing second derivatives. It justifies the
use of tied weights between the encoder and decoder, and suggests ways to extend the
success of denoising autoencoders to a larger family of energy-based models.
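The equivalence announced above can be sketched numerically for Gaussian corruption: training a score model to predict (x - x~)/sigma^2, where x~ is the corrupted version of x, drives it toward the score of the Parzen-smoothed density. The linear score model and the one-dimensional data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
x = rng.normal(loc=2.0, scale=1.0, size=(5000, 1))   # clean 1-D data

# Score model s(x) = a*x + b (illustrative linear parameterization).
a, b = -0.1, 0.0
lr = 0.05
for step in range(3000):
    noise = rng.normal(scale=sigma, size=x.shape)
    x_noisy = x + noise
    target = -noise / sigma**2            # = (x - x_noisy) / sigma^2
    err = a * x_noisy + b - target        # gradient of 0.5 * (pred - target)^2
    a -= lr * float(np.mean(err * x_noisy))
    b -= lr * float(np.mean(err))

# For N(2, 1) data smoothed with N(0, sigma^2), the true score of the
# smoothed density is (2 - x) / (1 + sigma^2): slope -0.8, intercept 1.6.
print(f"learned score: s(x) = {a:.2f} * x + {b:.2f}")
```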
1 Introduction
This note uncovers an unsuspected link between the score matching technique (Hyväri-
nen, 2005; Hyvärinen, 2008) for learning the parameters of unnormalized density mod-
els over continuous-valued data, and the training of denoising autoencoders (Vincent
et al. , 2008, 2010).
Score matching (SM) is an alternative to the maximum likelihood principle suitable
for unnormalized probability density models whose partition function is intractable. Its |
2202.08371.pdf | THE QUARKS OF ATTENTION
PIERRE BALDI AND ROMAN VERSHYNIN
Abstract. Attention plays a fundamental role in both natural and artificial intelligence
systems. In deep learning, attention-based neural architectures, such as transformer archi-
tectures, are widely used to tackle problems in natural language processing and beyond.
Here we investigate the fundamental building blocks of attention and their computational
properties. Within the standard model of deep learning, we classify all possible fundamental
building blocks of attention in terms of their source, target, and computational mechanism.
We identify and study three most important mechanisms: additive activation attention, mul-
tiplicative output attention (output gating), and multiplicative synaptic attention (synaptic
gating). The gating mechanisms correspond to multiplicative extensions of the standard
model and are used across all current attention-based deep learning architectures. We study
their functional properties and estimate the capacity of several attentional building blocks
in the case of linear and polynomial threshold gates. Surprisingly, additive activation atten-
tion plays a central role in the proofs of the lower bounds. Attention mechanisms reduce
the depth of certain basic circuits and leverage the power of quadratic activations without
incurring their full cost.
Keywords: neural networks; attention; transformers; capacity; complexity; deep learning.
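A minimal single-unit sketch of the three mechanisms named in the abstract, with one attending unit acting on one target unit; the tanh and sigmoid nonlinearities are illustrative choices, whereas the paper's capacity analysis concerns linear and polynomial threshold gates.

```python
import numpy as np

f = np.tanh                                  # target unit nonlinearity
g = lambda z: 1.0 / (1.0 + np.exp(-z))       # attending/gating unit nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=8)                       # shared input
w = rng.normal(size=8)                       # target unit weights
v = rng.normal(size=8)                       # attending unit weights

# Additive activation attention: the attending signal is added to the
# target unit's pre-activation.
out_additive = f(w @ x + v @ x)

# Multiplicative output attention (output gating): the attending unit's
# output multiplies the target unit's output.
out_output_gated = f(w @ x) * g(v @ x)

# Multiplicative synaptic attention (synaptic gating): the attending unit's
# output modulates the target unit's synaptic weights.
out_synaptic_gated = f((g(v @ x) * w) @ x)

print(out_additive, out_output_gated, out_synaptic_gated)
```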
Contents
1. Introduction
2. Systematic Identification of Attention Quarks: Within and Beyond the Standard Model
3. All you Need is Gating: Transformers
4. Functional Aspects of Attention
5. Cardinal Capacity Review
6. Capacity of Single Unit Attention
7. Capacity of Attention Layers
8. Conclusion
9. Appendix: Detailed Proof of Theorem 6.5
Acknowledgment
References
“Everyone knows what attention is... It is the taking possession by the mind in clear
and vivid form, of one out of what seem several simultaneously possible objects or trains of
thought...” William James, Principles of Psychology (1890).
Date : February 18, 2022.
|
2404.12358.pdf | Preprint
From rtoQ∗: Your Language Model is Secretly a Q-Function
Rafael Rafailov*, Stanford University, rafailov@stanford.edu
Joey Hejna*, Stanford University, jhejna@stanford.edu
Ryan Park, Stanford University, rypark@stanford.edu
Chelsea Finn, Stanford University, cbfinn@stanford.edu
Abstract
Reinforcement Learning From Human Feedback (RLHF) has been critical
to the success of the latest generation of generative AI models. In response
to the complex nature of the classical RLHF pipeline, direct alignment
algorithms such as Direct Preference Optimization (DPO) have emerged as
an alternative approach. Although DPO solves the same objective as the
standard RLHF setup, there is a mismatch between the two approaches.
Standard RLHF deploys reinforcement learning in a specific token-level MDP,
while DPO is derived as a bandit problem in which the whole response of the
model is treated as a single arm. In this work we rectify this difference, first
we theoretically show that we can derive DPO in the token-level MDP as a
general inverse Q-learning algorithm, which satisfies the Bellman equation.
Using our theoretical results, we provide three concrete empirical insights.
First, we show that because of its token level interpretation, DPO is able to
perform some type of credit assignment. Next, we prove that under the token
level formulation, classical search-based algorithms, such as MCTS, which
have recently been applied to the language generation space, are equivalent
to likelihood-based search on a DPO policy. Empirically we show that a
simple beam search yields meaningful improvement over the base DPO
policy. Finally, we show how the choice of reference policy causes implicit
rewards to decline during training. We conclude by discussing applications of
our work, including information elicitation in multi-turn dialogue, reasoning,
agentic applications and end-to-end training of multi-model systems.
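For reference, the DPO objective that the paper reinterprets maximizes the log-sigmoid of the gap between the policy's and the reference policy's log-likelihood ratios on a preferred versus a dispreferred response. The numpy sketch below operates on per-token log-probabilities; the toy inputs are random placeholders.

```python
import numpy as np

def dpo_loss(logp_w, logp_ref_w, logp_l, logp_ref_l, beta=0.1):
    """DPO loss for one preference pair, given per-token log-probabilities of
    the chosen (w) and rejected (l) responses under the policy and the frozen
    reference policy."""
    ratio_w = logp_w.sum() - logp_ref_w.sum()   # log pi(y_w|x) - log pi_ref(y_w|x)
    ratio_l = logp_l.sum() - logp_ref_l.sum()   # log pi(y_l|x) - log pi_ref(y_l|x)
    margin = beta * (ratio_w - ratio_l)
    return np.logaddexp(0.0, -margin)           # = -log sigmoid(margin), stable

rng = np.random.default_rng(0)
toy = {k: -np.abs(rng.normal(size=12)) for k in ("w", "ref_w", "l", "ref_l")}
print(dpo_loss(toy["w"], toy["ref_w"], toy["l"], toy["ref_l"]))
```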
1 Introduction
Reinforcement Learning from Human Feedback (RLHF) has become the defacto method for
aligning large language models (LLMs) with human intent due to its success in a wide range
of applications from summarization (Stiennon et al., 2022) to instruction following (Ouyang
et al., 2022). By learning a reward function from human-labeled comparisons, RLHF is able
to capture complex objectives that are indescribable in practice. Following the success
of (Ziegler et al., 2020), numerous works have considered new algorithms for training and
sampling from large models in various domains using techniques from reinforcement learning
(RL). In particular direct alignment methods, such as Direct Preference Optimization (DPO)
(Rafailov et al., 2023) have gained traction in recent months because of their simplicity (Zhao
et al., 2023a; Azar et al., 2023). Instead of learning a reward function and then using RL,
direct alignment methods use the relationship between reward functions and policies in the
contextual bandit setting to optimize both simultaneously. Similar ideas have since been
applied to vision language (Zhao et al., 2023b) and image generation models (Lee et al., 2023).
*Denotes equal contribution
|
2112.07868.pdf | Few-shot Instruction Prompts for Pretrained Language Models to Detect
Social Biases
Shrimai Prabhumoye1, Rafal Kocielnik2, Mohammad Shoeybi1,
Anima Anandkumar1,2, Bryan Catanzaro1
1NVIDIA,2California Institute of Technology
{sprabhumoye@nvidia.com, rafalko@caltech.edu}
Abstract
Warning: this paper contains content that may
be offensive or upsetting.
Detecting social bias in text is challenging due
to nuance, subjectivity, and difficulty in ob-
taining good quality labeled datasets at scale,
especially given the evolving nature of so-
cial biases and society. To address these
challenges, we propose a few-shot instruction-
based method for prompting pre-trained lan-
guage models (LMs). We select a few class-
balanced exemplars from a small support
repository that are closest to the query to be
labeled in the embedding space. We then pro-
vide the LM with instruction that consists of
this subset of labeled exemplars, the query
text to be classified, a definition of bias, and
prompt it to make a decision. We demon-
strate that large LMs used in a few-shot con-
text can detect different types of fine-grained
biases with similar and sometimes superior ac-
curacy to fine-tuned models. We observe that
the largest 530B parameter model is signifi-
cantly more effective in detecting social bias
compared to smaller models (achieving at least
13% improvement in AUC metric compared
to other models). It also maintains a high
AUC (dropping less than 2%) when the labeled
repository is reduced to as few as 100 samples.
Large pretrained language models thus make it
easier and quicker to build new bias detectors.
1 Introduction
Detecting social bias in text is of utmost importance
as stereotypes and biases can be projected through
language (Fiske, 1993). Detecting bias is challeng-
ing because it can be expressed through seemingly
innocuous statements which are implied and rarely
explicit, and the interpretation of bias can be sub-
jective leading to noise in labels. In this work, we
focus on detecting social bias in text as defined in
Sap et al. (2020) using few-shot instruction-based
prompting of pre-trained language models (LMs).
Current approaches that detect bias require large
labeled datasets to train the models (Chung et al.,
2019; Waseem and Hovy, 2016; Zampieri et al.,
2019; Davidson et al., 2017a). Collecting such
labeled sets is an expensive process and hence
they are not easily available. Furthermore, most
of the prior work relies on finetuning (Sap et al.,
2020; Mandl et al., 2019; Zampieri et al., 2019)
neural architectures which is costly in case of
large LMs (Strubell et al., 2019) and access to
finetune large LMs may be limited (Brown et al.,
2020). Prior work on bias detection has not fo-
cused on modeling multiple types of biases across
datasets as it requires careful optimization to suc-
ceed (Hashimoto et al., 2017; Søgaard and Gold-
berg, 2016; Ruder, 2017). Finetuning a model
can also lead to over-fitting especially in case of
smaller train sets and to catastrophic forgetting of
knowledge present in the pre-trained model (Fatemi
et al., 2021). Moreover, finetuning approaches are
prone to be affected by noisy labels (Song et al.,
2022) which is especially an issue with datasets
for bias detection. The human labeling used to an-
notate these datasets can introduce bias and noisy
labels (Hovy and Prabhumoye, 2021).
We harness the knowledge present in large scale
pre-trained language models (Davison et al., 2019;
Zhou et al., 2020; Petroni et al., 2019; Zhong et al.,
2021; Shin et al., 2020) to detect a rich set of bi-
ases. Our method prompts the LM with a textual
post and labeled exemplars along with instructions
to detect bias in the given post. We explore the
capabilities of LMs to flexibly accommodate differ-
ent dimensions of bias without any finetuning and
with limited access to labeled samples (few-shot
classification).
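To make the procedure concrete, the sketch below shows one possible implementation of the exemplar selection and prompt construction described above. The embedding function, label format, and bias definition are placeholders rather than the exact templates used in this paper, and the class-balancing step is omitted for brevity.

```python
import numpy as np

def build_bias_prompt(query, support_texts, support_labels, support_embs, embed_fn, k=4):
    """Select the k nearest labeled exemplars to the query in embedding space
    and assemble a few-shot instruction prompt for the language model."""
    q = embed_fn(query)
    sims = support_embs @ q / (np.linalg.norm(support_embs, axis=1) * np.linalg.norm(q))
    nearest = np.argsort(-sims)[:k]
    parts = ["A post is biased if it implies an offensive stereotype about a social group."]
    for i in nearest:
        parts.append(f"Post: {support_texts[i]}\nBiased: {support_labels[i]}")
    parts.append(f"Post: {query}\nBiased:")
    return "\n\n".join(parts)
```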
Prompt-engineering plays a central role in
finetuning-free approaches (Liu et al., 2021b). It
is the process of creating a prompting function that
results in the best performance on the desired down-
stream task. Prompt-engineering can be performed
arXiv:2112.07868v2 [cs.CL] 15 Apr 2022 |
2101.03288.pdf | How to Train Your Energy-Based Models
Yang Song yangsong@cs.stanford.edu
Stanford University
Diederik P. Kingma dpkingma@google.com
Google Research
Abstract
Energy-Based Models (EBMs), also known as non-normalized probabilistic models, specify
probability density or mass functions up to an unknown normalizing constant. Unlike
most other probabilistic models, EBMs do not place a restriction on the tractability of
the normalizing constant, thus are more flexible to parameterize and can model a more
expressive family of probability distributions. However, the unknown normalizing constant
of EBMs makes training particularly difficult. Our goal is to provide a friendly introduction
to modern approaches for EBM training. We start by explaining maximum likelihood
training with Markov chain Monte Carlo (MCMC), and proceed to elaborate on MCMC-free
approaches, including Score Matching (SM) and Noise Contrastive Estimation (NCE).
We highlight theoretical connections among these three approaches, and end with a brief
survey on alternative training methods, which are still under active research. Our tutorial
is targeted at an audience with basic understanding of generative models who want to apply
EBMs or start a research project in this direction.
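Before turning to the details, the following sketch illustrates the MCMC-based maximum likelihood training discussed in the tutorial, using short-run Langevin dynamics to draw approximate model samples. Step sizes, sample initialization, and regularizers are simplified relative to practical recipes.

```python
import torch

def ebm_mle_step(energy_net, x_data, optimizer, n_steps=20, step_size=0.01):
    """Sketch of maximum-likelihood EBM training: the log-likelihood gradient is
    the energy gradient on data samples minus that on (approximate) model samples."""
    # Draw approximate model samples with short-run Langevin dynamics from noise.
    x_neg = torch.randn_like(x_data)
    for _ in range(n_steps):
        x_neg.requires_grad_(True)
        grad_x = torch.autograd.grad(energy_net(x_neg).sum(), x_neg)[0]
        x_neg = (x_neg - 0.5 * step_size * grad_x
                 + step_size ** 0.5 * torch.randn_like(x_neg)).detach()
    optimizer.zero_grad()
    loss = energy_net(x_data).mean() - energy_net(x_neg).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```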
1. Introduction
Probabilistic models with a tractable likelihood are a double-edged sword. On one hand, a
tractable likelihood allows for straightforward comparison between models, and straightfor-
ward optimization of the model parameters w.r.t. the log-likelihood of the data. Through
tractable models such as autoregressive (Graves, 2013; Germain et al., 2015; Van Oord et al.,
2016) or flow-based generative models (Dinh et al., 2014, 2016; Rezende and Mohamed,
2015), we can learn flexible models of high-dimensional data. In some cases even though
the likelihood is not completely tractable, we can often compute and optimize a tractable
lower bound of the likelihood, as in the framework of variational autoencoders (Kingma and
Welling, 2014; Rezende et al., 2014).
Still, the set of models with a tractable likelihood is constrained. Models with a tractable
likelihood need to be of a certain form: for example, in case of autoregressive models, the
model distribution is factorized as a product of conditional distributions, and in flow-based
generative models the data is modeled as an invertible transformation of a base distribution.
In case of variational autoencoders, the data must be modeled as a directed latent-variable
model. A tractable likelihood is related to the fact that these models assume that exact
synthesis of pseudo-data from the model can be done with a specified, tractable procedure.
These assumptions are not always natural.
Energy-based models (EBM) are much less restrictive in functional form: instead of speci-
fying a normalized probability, they only specify the unnormalized negative log-probability,
arXiv:2101.03288v2 [cs.LG] 17 Feb 2021 |
2303.07487v2.pdf | Using VAEs to Learn Latent Variables: Observations on
Applications in cryo-EM
Edelberg, Daniel G.
Yale University
Lederman, Roy R.
Yale University
May 12, 2023
Abstract
Variational autoencoders (VAEs) are a popular generative model used to approximate distributions.
The encoder part of the VAE is used in amortized learning of latent variables, producing a latent rep-
resentation for data samples. Recently, VAEs have been used to characterize physical and biological
systems. In this case study, we qualitatively examine the amortization properties of a VAE used in
biological applications. We find that in this application the encoder bears a qualitative resemblance to
more traditional explicit representation of latent variables.
1 Introduction
Variational Autoencoders (VAEs) provide a deep learning method for efficient approximate inference for
problems with continuous latent variables. A brief reminder about VAEs is presented in Section 2.1; a more
complete description can be found, inter alia, in [1, 2, 3, 4, 5, 6]. Since their introduction, VAEs have found
success in a wide variety of fields. Recently, they have been used in scientific applications and physical
systems [7, 8, 9, 10, 11].
Given a set of data x = {xi}, VAEs simultaneously learn an encoder Encξ that expresses a conditional
distribution qξ(z|x) of a latent variable zi given a sample xi, and a decoder Decθ which expresses the
conditional distribution pθ(x|z). They are trained using empirical samples to approximate the distribution
pθ(x,z).
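For concreteness, a minimal sketch of the objective such a VAE optimizes is given below. The Gaussian encoder parameterization and mean-squared reconstruction term are generic illustrative choices, not the specific likelihood used in CryoDRGN.

```python
import torch
import torch.nn.functional as F

def vae_negative_elbo(encoder, decoder, x):
    """Sketch of amortized variational inference: the encoder maps x to the
    parameters of q_xi(z|x), a latent is drawn with the reparameterization trick,
    and the loss is reconstruction error plus KL(q_xi(z|x) || N(0, I))."""
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_hat = decoder(z)
    recon = F.mse_loss(x_hat, x, reduction="sum")   # -log p_theta(x|z) up to constants
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```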
In this work we focus on the properties of the encoder distribution qξ(z|x) that arise as an approximation of
the distribution pθ(z|x). A single encoder qξ(z|x) is optimized to be able to produce the distribution of latent
variable z for any input x, which is a form of amortization. Intuitively, one might expect that the encoder
qξ(z|x) would generalize well to plausible inputs that it has not encountered during the optimization/training
procedure. Indeed, this generalization is observed in many applications, and the ability of the encoder
to compute the latent variables for new unseen data points is used in some applications. In addition,
the variational construction sidesteps a statistical problem by marginalizing over the latent variables to
approximate the maximum-likelihood estimator (MLE) for some parameters θ of the distribution pθ(x,z),
rather than θ and the latent variables zi associated with each sample xi. In the latter case, the number
of variables grows with the number of samples and the estimates of pθ(x,z) may not converge to the true
solution.
We present a qualitative case study of the amortization in VAEs in a physical problem, looking at a VAE
applied to the problem of continuous heterogeneity in cryo-electron microscopy (cryo-EM), implemented in
CryoDRGN [7]. We examine the hypothesis that the encoder in this VAE generalizes well to previously unseen
data, and we compare the use of a VAE to the use of an explicit variational estimation of the distribution of
the latent variables. In order to study the generalization in a realistic environment, we exploit well-known
invariances and approximate invariances in cryo-EM data to produce natural tests.
Our case study suggests that in this case the encoder does not seem to generalize well; this can arguably
be interpreted as a form of overfitting of the data. Furthermore, we find that using explicit latent variables
arXiv:2303.07487v2 [stat.ML] 10 May 2023 |
2205.12365.pdf | Low-rank Optimal Transport:
Approximation, Statistics and Debiasing
Meyer Scetbon
CREST, ENSAE
meyer.scetbon@ensae.fr
Marco Cuturi
Apple and CREST, ENSAE
cuturi@apple.com
Abstract
The matching principles behind optimal transport (OT) play an increasingly impor-
tant role in machine learning, a trend which can be observed when OT is used to
disambiguate datasets in applications (e.g. single-cell genomics) or used to improve
more complex methods (e.g. balanced attention in transformers or self-supervised
learning). To scale to more challenging problems, there is a growing consensus that
OT requires solvers that can operate on millions, not thousands, of points. The low-
rank optimal transport (LOT) approach advocated in Scetbon et al. [2021] holds
several promises in that regard, and was shown to complement more established
entropic regularization approaches, being able to insert itself in more complex
pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to
those that have a low-nonnegative rank, yielding linear time algorithms in cases
of interest. However, these promises can only be fulfilled if the LOT approach
is seen as a legitimate contender to entropic regularization when compared on
properties of interest, where the scorecard typically includes theoretical properties
(statistical complexity and relation to other methods) or practical aspects (debiasing,
hyperparameter tuning, initialization). We target each of these areas in this paper
in order to cement the impact of low-rank approaches in computational OT.
1 Introduction
Optimal transport (OT) is used across data-science to put in correspondence different sets of observa-
tions. These observations may come directly from datasets, or, in more advanced applications, depict
intermediate layered representations of data. OT theory provides a single grammar to describe and
solve increasingly complex matching problems (linear, quadratic, regularized, unbalanced, etc...),
making it gain a stake in various areas of science such as single-cell biology Schiebinger et al.
[2019], Yang et al. [2020], Demetci et al. [2020], imaging Schmitz et al. [2018], Heitz et al. [2020],
Zheng et al. [2020] or neuroscience Janati et al. [2020], Koundal et al. [2020].
Regularized approaches to OT. Solving OT problems at scale poses, however, formidable chal-
lenges. The most obvious among them is computational: the Kantorovich [1942] problem on discrete
measures of size n is a linear program that requires O(n^3 log n) operations to be solved. A second
and equally important challenge lies in the estimation of OT in high-dimensional settings, since it
suffers from the curse-of-dimensionality Fournier and Guillin [2015]. The advent of regularized
approaches, such as entropic regularization [Cuturi, 2013], has pushed these boundaries thanks to
faster algorithms [Chizat et al., 2020, Clason et al., 2021] and improved statistical aspects [Genevay
et al., 2018a]. Despite these clear strengths, regularized OT solvers remain, however, costly as they
typically scale quadratically in the number of observations.
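For orientation, the sketch below shows the standard entropic (Sinkhorn) solver whose quadratic cost motivates the low-rank alternative studied here; numerical stabilization (log-domain updates) and convergence checks are omitted.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Minimal sketch of entropic OT (Cuturi, 2013): alternating scaling updates
    on the Gibbs kernel K = exp(-C / eps); cost is quadratic in the number of points."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # the dense transport plan
```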
Scaling up OT using low-rank couplings. While it is always intuitively possible to reduce the size
of measures (e.g. using k-means) prior to solving an OT between them, a promising line of work
proposes to combine both [Forrow et al., 2019, Scetbon et al., 2021, 2022]. Conceptually, these
Preprint. Under review.
arXiv:2205.12365v2 [stat.ML] 15 Sep 2022 |
2207.06569.pdf | Benign, Tempered, or Catastrophic:
A Taxonomy of Overfitting
Neil Mallinar∗
UC San Diego
nmallina@ucsd.edu
James B. Simon∗
UC Berkeley
james.simon@berkeley.edu
Amirhesam Abedsoltan
UC San Diego
aabedsoltan@ucsd.edu
Parthe Pandit
UC San Diego
parthepandit@ucsd.edu
Mikhail Belkin
UC San Diego
mbelkin@ucsd.edu
Preetum Nakkiran
Apple & UC San Diego
preetum@apple.com
Abstract
The practical success of overparameterized neural networks has motivated the recent scientific study of interpo-
lating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks,
can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from
statistical learning theory. Aiming to explain this, a body of recent work has studied benign overfitting, a phenomenon
where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue
that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural
networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time,
implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this
intermediate regime tempered overfitting, and we initiate its systematic study. We first explore this phenomenon in the
context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and kernel eigenspectrum
under which KR exhibits each of the three behaviors. We find that kernels with powerlaw spectra, including Laplace
kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural
networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those
stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
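As a point of reference for the analysis summarized above, kernel ridge regression reduces to a linear solve; with the ridge parameter set to zero (and an invertible kernel matrix) the predictor interpolates the training labels, which is the regime whose overfitting behavior is classified here. A minimal sketch:

```python
import numpy as np

def kernel_ridge_predict(K_train, y_train, K_test_train, ridge=0.0):
    """Fit alpha = (K + ridge * I)^{-1} y and predict with the test/train cross-kernel.
    With ridge = 0 the fitted function passes through every training point."""
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + ridge * np.eye(n), y_train)
    return K_test_train @ alpha
```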
1 Introduction
In the last decade, the dramatic success of overparameterized deep neural networks (DNNs) has inspired the field
to reexamine the theoretical foundations of generalization. Classical statistical learning theory suggests that an
algorithm which interpolates (i.e. perfectly fits) its training data will typically catastrophically overfit at test time,
generalizing no better than a random function.1
Figure 1c illustrates the catastrophic overfitting classically expected of an interpolating method. Defying this
picture, DNNs can interpolate their training data and generalize well nonetheless [Neyshabur et al., 2015, Zhang
et al., 2017], suggesting the need for a new theoretical paradigm within which to understand their overfitting.
This need motivated the identification and study of benign overfitting using the terminology of [Bartlett et al.,
2020] (also called “harmless interpolation” [Muthukumar et al., 2020]), a phenomenon in which certain methods that
perfectly fit the training data still approach Bayes-optimal generalization in the limit of large trainset size. Intuitively
speaking, benignly-overfitting methods fit the target function globally, yet fit the noise only locally, and the addition
of more label noise does not asymptotically degrade generalization. Figure 1a illustrates a simple method that is
∗Co-first authors.
1There are various ways to formalize this prediction depending on the setting: it is a consequence of the “bias-variance tradeoff” in statistics,
the “bias-complexity tradeoff” in PAC learning, and “capacity control”-based generalization bounds in kernel ridge regression.
arXiv:2207.06569v2 [cs.LG] 20 Oct 2022 |
1909.08593v2.pdf | Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler∗Nisan Stiennon∗Jeffrey Wu Tom B. Brown
Alec Radford Dario Amodei Paul Christiano Geoffrey Irving
OpenAI
{dmz,nisan,jeffwu,tom,alec,damodei,paul,irving}@openai.com
Abstract
Reward learning enables the application of rein-
forcement learning (RL) to tasks where reward is
defined by human judgment, building a model of
reward by asking humans questions. Most work
on reward learning has used simulated environ-
ments, but complex information about values is of-
ten expressed in natural language, and we believe
reward learning for language is a key to making
RL practical and safe for real-world tasks. In this
paper, we build on advances in generative pretrain-
ing of language models to apply reward learning
to four natural language tasks: continuing text
with positive sentiment or physically descriptive
language, and summarization tasks on the TL;DR
and CNN/Daily Mail datasets. For stylistic con-
tinuation we achieve good results with only 5,000
comparisons evaluated by humans. For summa-
rization, models trained with 60,000 comparisons
copy whole sentences from the input but skip irrel-
evant preamble; this leads to reasonable ROUGE
scores and very good performance according to
our human labelers, but may be exploiting the fact
that labelers rely on simple heuristics.
1. Introduction
We would like to apply reinforcement learning to complex
tasks defined only by human judgment, where we can only
tell whether a result is good or bad by asking humans. To
do this, we can first use human labels to train a model of
reward, and then optimize that model. While there is a long
history of work learning such models from humans through
interaction, this work has only recently been applied to mod-
ern deep learning, and even then has only been applied to
relatively simple simulated environments (Christiano et al.,
2017; Ibarz et al., 2018; Bahdanau et al., 2018). By contrast,
real world settings in which humans need to specify complex
goals to AI agents are likely to both involve and require
natural language, which is a rich medium for expressing
value-laden concepts. Natural language is particularly im-
portant when an agent must communicate back to a human
to help provide a more accurate supervisory signal (Irving
et al., 2018; Christiano et al., 2018; Leike et al., 2018).
*Equal contribution. Correspondence to paul@openai.com.
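A minimal sketch of the reward-learning step described above: given scalar scores assigned by the reward model to a human-preferred continuation and a rejected one, the model is fit with a pairwise logistic (Bradley–Terry-style) loss. This paper actually collects best-of-four comparisons; the two-way case is shown for simplicity.

```python
import torch.nn.functional as F

def pairwise_reward_loss(r_preferred, r_rejected):
    """Logistic loss on the reward difference between the human-preferred
    sample and the rejected one; minimizing it fits the reward model."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()
```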
Natural language processing has seen substantial recent ad-
vances. One successful method has been to pretrain a large
generative language model on a corpus of unsupervised data,
then fine-tune the model for supervised NLP tasks (Dai and
Le, 2015; Peters et al., 2018; Radford et al., 2018; Khandel-
wal et al., 2019). This method often substantially outper-
forms training on the supervised datasets from scratch, and
a single pretrained language model often can be fine-tuned
for state of the art performance on many different super-
vised datasets (Howard and Ruder, 2018). In some cases,
fine-tuning is not required: Radford et al. (2019) find that
generatively trained models show reasonable performance
on NLP tasks with no additional training (zero-shot).
There is a long literature applying reinforcement learning to
natural language tasks. Much of this work uses algorithmi-
cally defined reward functions such as BLEU for translation
(Ranzato et al., 2015; Wu et al., 2016), ROUGE for summa-
rization (Ranzato et al., 2015; Paulus et al., 2017; Wu and
Hu, 2018; Gao et al., 2019b), music theory-based rewards
(Jaques et al., 2017), or event detectors for story generation
(Tambwekar et al., 2018). Nguyen et al. (2017) used RL
on BLEU but applied several error models to approximate
human behavior. Wu and Hu (2018) and Cho et al. (2019)
learned models of coherence from existing text and used
them as RL rewards for summarization and long-form gen-
eration, respectively. Gao et al. (2019a) built an interactive
summarization tool by applying reward learning to one ar-
ticle at a time. Experiments using human evaluations as
rewards include Kreutzer et al. (2018) which used off-policy
reward learning for translation, and Jaques et al. (2019)
which applied the modified Q-learning methods of Jaques
et al. (2017) to implicit human preferences in dialog. Yi
et al. (2019) learned rewards from humans to fine-tune dia-
log models, but smoothed the rewards to allow supervised
learning. We refer to Luketina et al. (2019) for a survey of
arXiv:1909.08593v2 [cs.CL] 8 Jan 2020 |
1406.2661.pdf | Generative Adversarial Nets
Ian J. Goodfellow, Jean Pouget-Abadie∗, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair†, Aaron Courville, Yoshua Bengio‡
Département d’informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7
Abstract
We propose a new framework for estimating generative models via an adversar-
ial process, in which we simultaneously train two models: a generative model G
that captures the data distribution, and a discriminative model D that estimates
the probability that a sample came from the training data rather than G. The train-
ing procedure for G is to maximize the probability of D making a mistake. This
framework corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the training data
distribution and D equal to 1/2 everywhere. In the case where G and D are defined
by multilayer perceptrons, the entire system can be trained with backpropagation.
There is no need for any Markov chains or unrolled approximate inference net-
works during either training or generation of samples. Experiments demonstrate
the potential of the framework through qualitative and quantitative evaluation of
the generated samples.
1 Introduction
The promise of deep learning is to discover rich, hierarchical models [2] that represent probability
distributions over the kinds of data encountered in artificial intelligence applications, such as natural
images, audio waveforms containing speech, and symbols in natural language corpora. So far, the
most striking successes in deep learning have involved discriminative models, usually those that
map a high-dimensional, rich sensory input to a class label [14, 22]. These striking successes have
primarily been based on the backpropagation and dropout algorithms, using piecewise linear units
[19, 9, 10] which have a particularly well-behaved gradient. Deep generative models have had less
of an impact, due to the difficulty of approximating many intractable probabilistic computations that
arise in maximum likelihood estimation and related strategies, and due to difficulty of leveraging
the benefits of piecewise linear units in the generative context. We propose a new generative model
estimation procedure that sidesteps these difficulties.1
In the proposed adversarial nets framework, the generative model is pitted against an adversary: a
discriminative model that learns to determine whether a sample is from the model distribution or the
data distribution. The generative model can be thought of as analogous to a team of counterfeiters,
trying to produce fake currency and use it without detection, while the discriminative model is
analogous to the police, trying to detect the counterfeit currency. Competition in this game drives
both teams to improve their methods until the counterfeits are indistinguishable from the genuine
articles.
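A minimal sketch of one training iteration of this two-player game is given below, assuming D ends in a sigmoid so that it outputs probabilities. The generator update uses the non-saturating variant commonly preferred in practice rather than literally minimizing log(1 − D(G(z))).

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, opt_g, opt_d, x_real, z_dim=64):
    """One step of the adversarial game: D learns to separate real data from
    samples of G, then G is updated to make its samples look real to D."""
    batch = x_real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    x_fake = G(torch.randn(batch, z_dim))
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    d_loss = (F.binary_cross_entropy(D(x_real), ones)
              + F.binary_cross_entropy(D(x_fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator (non-saturating form): maximize log D(G(z))
    g_loss = F.binary_cross_entropy(D(x_fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```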
∗Jean Pouget-Abadie is visiting Université de Montréal from Ecole Polytechnique.
†Sherjil Ozair is visiting Université de Montréal from Indian Institute of Technology Delhi.
‡Yoshua Bengio is a CIFAR Senior Fellow.
1All code and hyperparameters available at http://www.github.com/goodfeli/adversarial
arXiv:1406.2661v1 [stat.ML] 10 Jun 2014 |
2402.10171.pdf | Data Engineering for Scaling Language Models to 128K Context
Yao Fuκ, Rameswar Pandaη, Xinyao Niuµ, Xiang Yueπ, Hannaneh Hajishirziσ, Yoon Kimλ, Hao Pengδ
κUniversity of Edinburgh, ηMIT-IBM Watson AI Lab, µUniversity of Melbourne, πOhio State University
σUniversity of Washington, λMIT, δUIUC
yao.fu@ed.ac.uk yoonkim@mit.edu haopeng@illinois.edu
https://github.com/FranxYao/Long-Context-Data-Engineering
Abstract
We study the continual pretraining recipe for scal-
ing language models’ context lengths to 128K,
with a focus on data engineering. We hypoth-
esize that long context modeling, in particular
the ability to utilize information at arbitrary in-
put locations, is a capability that is mostly al-
ready acquired through large-scale pretraining,
and that this capability can be readily extended
to contexts substantially longer than seen during
training (e.g., 4K to 128K) through lightweight
continual pretraining on appropriate data mix-
ture. We investigate the quantity and quality of
the data for continual pretraining: (1) for quan-
tity, we show that 500 million to 5 billion to-
kens are enough to enable the model to retrieve
information anywhere within the 128K context;
(2) for quality, our results equally emphasize do-
main balance and length upsampling. Concretely,
we find that naïvely upsampling longer data on
certain domains like books, a common practice
of existing work, gives suboptimal performance,
and that a balanced domain mixture is impor-
tant. We demonstrate that continual pretraining
of the full model on 1B-5B tokens of such data
is an effective and affordable strategy for scaling
the context length of language models to 128K.
Our recipe outperforms strong open-source long-
context models and closes the gap to frontier mod-
els like GPT-4 128K.
1. Introduction
A context window of 128K tokens enables large language
models to perform tasks that go significantly beyond the exist-
ing paradigm, such as multi-document question answer-
ing (Caciularu et al., 2023), repository-level code under-
standing (Bairi et al., 2023), long-history dialog model-
ing (Mazumder & Liu, 2024), and language model-powered
autonomous agents (Weng, 2023). A popular testbed for whether models can actually utilize long context length
is the recent Needle-in-a-Haystack test (Kamradt, 2023),
which asks the model to precisely recite the information
in a given sentence where the sentence (the “needle”) is
placed in an arbitrary location of a 128K long document (the
“haystack”). In the open-source space, although works like
LongLoRA (Chen et al., 2023b) and YaRN-Mistral (Peng
et al., 2023) theoretically support 100K context, they are
not able to pass this test at such context lengths, as shown
in Fig. 1. Currently, only closed-source frontier models like
GPT-4 128K have demonstrated strong performance on the
Needle-in-a-Haystack test.
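For readers unfamiliar with the test, the sketch below shows how a single Needle-in-a-Haystack example can be constructed. The tokenizer interface (HuggingFace-style encode/decode), the needle wording, and the question are illustrative placeholders, not the exact protocol of Kamradt (2023).

```python
import random

def make_needle_example(filler_text, needle, question, tokenizer, context_len=128_000):
    """Insert the needle sentence at a random depth of a long filler document,
    truncated to the target context length, and append a retrieval question."""
    tokens = tokenizer.encode(filler_text)[: context_len - 512]   # leave room for needle + question
    depth = random.randint(0, len(tokens))
    haystack = (tokenizer.decode(tokens[:depth]) + " " + needle + " "
                + tokenizer.decode(tokens[depth:]))
    return haystack + "\n\n" + question
```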
This work investigates data engineering methods for scaling
language models’ context lengths. Our objective is to con-
tinue pretraining the language model on appropriate data
mixtures such that it can pass the Needle-in-a-Haystack test
at 128K length. Given that most existing models are trained
on less than 4K context length (Touvron et al., 2023a) and
that attention has quadratic complexity, continual pretrain-
ing with full attention on much longer context lengths (we
train on 64K-80K context lengths) may seem prohibitively
costly at a first glance. However, we show that this is feasi-
ble under academic-level resources (see Table 2). We use
LLaMA-2 7B and 13B as our base models. We do not make
any significant change to model architecture other than ad-
justing the base of RoPE, as in Xiong et al. (2023). Our
major focus is the data recipe: what andhow much data is
able to well-adapt a model to pass the Needle-in-a-Haystack
test at 128K context length.
We hypothesize that the capability to utilize information at
arbitrary locations within long context length is (mostly)
already acquired during pretraining, even for models pre-
trained on substantially shorter 4K contexts. This hypothe-
sis is in contrast to existing works like Xiong et al. (2023);
XVerse (2024), which perform continual pretraining on a
large amount of data (400B tokens) to inject long-context-
modeling capabilities; in this strategy, the cost can be as
high as pre-training from scratch. In this work we show
that continual pretraining on a small amount of long-context
data, in our case, 1-5B tokens, can “unlock” a 7B model’s
arXiv:2402.10171v1 [cs.CL] 15 Feb 2024 |
2402.03175v1.pdf | THE MATRIX: A BAYESIAN LEARNING MODEL FOR LLMS
Siddhartha Dalal
Department of Statistics
Columbia University
The City of New York
sd2803@columbia.edu
Vishal Misra
Department of Computer Science
Columbia University
The City of New York
vishal.misra@columbia.edu
ABSTRACT
In this paper, we introduce a Bayesian learning model to understand the behavior of Large Language
Models (LLMs). We explore the optimization metric of LLMs, which is based on predicting the next
token, and develop a novel model grounded in this principle. Our approach involves constructing an
ideal generative text model represented by a multinomial transition probability matrix with a prior,
and we examine how LLMs approximate this matrix. We discuss the continuity of the mapping
between embeddings and multinomial distributions, and present the Dirichlet approximation theorem
to approximate any prior. Additionally, we demonstrate how text generation by LLMs aligns with
Bayesian learning principles and delve into the implications for in-context learning, specifically
explaining why in-context learning emerges in larger models where prompts are considered as
samples to be updated. Our findings indicate that the behavior of LLMs is consistent with Bayesian
Learning, offering new insights into their functioning and potential applications.
1 Introduction
The advent of LLMs, starting with GPT3 [ 2], has revolutionized the world of natural language processing, and the
introduction of ChatGPT [ 14] has taken the world by storm. There have been several approaches to try and understand
how these models work, and in particular how “few-shot" or “in context learning" works [ 10,11,9], and it is an ongoing
pursuit. In our work we look at the workings of an LLM from a novel standpoint, and develop a Bayesian model to
explain their behavior. We focus on the optimization metric of next token prediction for these LLMs, and use that to
build an abstract probability matrix which is the cornerstone of our model and analysis. We show in our paper that the
behavior of LLMs is consistent with Bayesian learning and explain many empirical observations of the LLMs using our
model.
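As a toy illustration of the Bayesian-learning view developed below, consider a single row of the transition matrix: with a Dirichlet prior over its multinomial distribution, observing token counts yields a closed-form posterior predictive. The paper's actual argument uses finite mixtures of Dirichlets; a single component is shown here for simplicity.

```python
import numpy as np

def posterior_predictive(alpha, counts):
    """Dirichlet-multinomial update for one row of the next-token matrix:
    the posterior is Dirichlet(alpha + counts), and the predictive probability
    of each token is its normalized posterior mean."""
    posterior = np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)
    return posterior / posterior.sum()
```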
1.1 Paper organization and our contributions
We first describe our approach at a high level, and in the rest of the paper get into the details of the approach. We
focus on the optimization metric of these LLMs, namely, predict the next token, and develop the model from there on.
We first describe the ideal generative text model (Section 2.1), and relate it to its representation of an abstract (and
enormous) multinomial transition probability matrix. We argue that the optimization metric results in these LLMs
learning to represent this probability matrix during training, and text generation is nothing but picking a multinomial
distribution from a specific row of this matrix. This matrix, however is infeasible to be represented by the LLMs, even
with billions of parameters, so the LLMs learn to approximate it. Further, the training data is a subset of the entire text
in the world, so the learnt matrix is an approximation and reflection of the matrix induced by the training data, rather
than the a representation of the ideal matrix. Next (Section 3), we relate the rows of this matrix to the embeddings of the
prompt and prove (Theorem 3.1) a result on the continuity of the mapping between the embeddings and the multinomial
distribution induced by the embedding. We then prove (Theorem 4.1) that any prior over multinomial distribution can
be represented as a finite mixture of Dirichlet distributions. We then argue, and demonstrate (Section 5.2) that text
∗The authors are listed in alphabetical order.
arXiv:2402.03175v1 [cs.LG] 5 Feb 2024 |
2402.04845.pdf | AlphaFold Meets Flow Matching for Generating Protein Ensembles
Bowen Jing1Bonnie Berger1 2Tommi Jaakkola1
Abstract
The biological functions of proteins often de-
pend on dynamic structural ensembles. In this
work, we develop a flow-based generative mod-
eling approach for learning and sampling the
conformational landscapes of proteins. We re-
purpose highly accurate single-state predictors
such as AlphaFold and ESMFold and fine-tune
them under a custom flow matching framework
to obtain sequence-conditoned generative mod-
els of protein structure called Alpha FLOW and
ESM FLOW . When trained and evaluated on
the PDB, our method provides a superior com-
bination of precision and diversity compared to
AlphaFold with MSA subsampling. When fur-
ther trained on ensembles from all-atom MD,
our method accurately captures conformational
flexibility, positional distributions, and higher-
order ensemble observables for unseen proteins.
Moreover, our method can diversify a static
PDB structure with faster wall-clock convergence
to certain equilibrium properties than replicate
MD trajectories, demonstrating its potential as a
proxy for expensive physics-based simulations.
Code is available at https://github.com/
bjing2016/alphaflow .
1. Introduction
Proteins adopt complex three-dimensional structures, often
as members of structural ensembles with distinct states, col-
lective motions, and disordered fluctuations, to carry out
their biological functions. For example, conformational
changes are critical in the function of transporters, channels,
and enzymes, and the properties of equilibrium ensembles
help govern the strength and selectivity of molecular interac-
tions (Meller et al., 2023; V ¨ogele et al., 2023). While deep
learning methods such as AlphaFold (Jumper et al., 2021)
have excelled in the single-state modeling of experimental
protein structures, they fail to account for this conforma-
tional heterogeneity (Lane, 2023; Ourmazd et al., 2022).
1CSAIL, Massachusetts Institute of Technology 2Department
of Mathematics, Massachusetts Institute of Technology. Corre-
spondence to: Bowen Jing <bjing@mit.edu>.
Hence, a method which builds upon the level of accuracy of
single-structure predictors, but reveals underlying structural
ensembles, would be of great value to structural biologists.
Existing machine learning approaches for generating struc-
tural ensembles have focused on inference-time interven-
tions in AlphaFold that modify the multiple sequence
alignment (MSA) input (Del Alamo et al., 2022; Stein &
Mchaourab, 2022; Wayment-Steele et al., 2023), resulting in
a different structure prediction for each version of the MSA.
While these approaches have demonstrated some success,
they suffer from two key limitations. First, by operating on
the MSA, they cannot be generalized to structure predictors
based on protein language models (PLMs) such as ESMFold
(Lin et al., 2023) or OmegaFold (Wu et al., 2022), which
have grown in popularity due to their fast runtime and ease
of use. Secondly, these inference-time interventions do not
provide the capability to train on protein ensembles from
beyond the PDB—for example, ensembles from molecular
dynamics, which are of significant scientific interest but can
be extremely expensive to simulate (Shaw et al., 2010).
To address these limitations, in this work we combine Al-
phaFold and ESMFold with flow matching , a recent genera-
tive modeling framework (Lipman et al., 2022; Albergo &
Vanden-Eijnden, 2022), to propose a principled method for
sampling the conformational landscape of proteins. While
AlphaFold and ESMFold were originally developed and
trained as regression models that predict a single best protein
structure for a given MSA or sequence input, we develop
a strategy for repurposing them as (sequence-conditioned)
generative models of protein structure. This synthesis relies
on the key insight that iterative denoising frameworks (such
as diffusion and flow-matching) provide a general recipe
for converting regression models to generative models with
relatively little modification to the architecture and training
objective. Unlike inference-time MSA ablation, this strat-
egy applies equally well to PLM-based predictors and can
be used to train or fine-tune on arbitrary ensembles.
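As a schematic of the iterative-denoising recipe referred to above, the sketch below shows a generic conditional flow-matching loss with a linear interpolation path and a Gaussian prior. The paper's actual framework is customized to AlphaFold/ESMFold (for example, a harmonic prior and structure-module specifics), so the network interface here is a placeholder.

```python
import torch

def flow_matching_loss(velocity_net, x1, cond):
    """Generic conditional flow matching: sample a prior point x0, form the
    interpolant x_t = (1 - t) * x0 + t * x1, and regress the predicted velocity
    onto the target velocity x1 - x0."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.size(0), *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1
    v_pred = velocity_net(xt, t, cond)
    return ((v_pred - (x1 - x0)) ** 2).mean()
```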
While flow matching has been well established for images,
its application to protein structures remains nascent (Bose
et al., 2023). Hence, we develop a custom flow matching
framework tailored to the architecture and training practices
of AlphaFold and ESMFold. Our framework leverages the
polymer-structured prior distribution from harmonic diffu-
arXiv:2402.04845v1 [q-bio.BM] 7 Feb 2024 |
1506.00552.pdf | Coordinate Descent Converges Faster with the
Gauss-Southwell Rule Than Random Selection
Julie Nutini1, Mark Schmidt1, Issam H. Laradji1, Michael Friedlander2, Hoyt Koepke3
1University of British Columbia,2University of California, Davis,3Dato
Abstract
There has been significant recent work on the theory and application of randomized coordinate descent
algorithms, beginning with the work of Nesterov [ SIAM J. Optim., 22(2), 2012 ], who showed that a
random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell selection
rule. This result suggests that we should never use the Gauss-Southwell rule, because it is typically
much more expensive than random selection. However, the empirical behaviours of these algorithms
contradict this theoretical result: in applications where the computational costs of the selection rules
are comparable, the Gauss-Southwell selection rule tends to perform substantially better than random
coordinate selection. We give a simple analysis of the Gauss-Southwell rule showing that—except in
extreme cases—its convergence rate is faster than choosing random coordinates. We also (i) show that
exact coordinate optimization improves the convergence rate for certain sparse problems, (ii) propose a
Gauss-Southwell-Lipschitz rule that gives an even faster convergence rate given knowledge of the Lipschitz
constants of the partial derivatives, (iii) analyze the effect of approximate Gauss-Southwell rules, and
(iv) analyze proximal-gradient variants of the Gauss-Southwell rule.
1 Coordinate Descent Methods
There has been substantial recent interest in applying coordinate descent methods to solve large-scale op-
timization problems, starting with the seminal work of Nesterov [2012], who gave the first global rate-of-
convergence analysis for coordinate-descent methods for minimizing convex functions. This analysis suggests
that choosing a random coordinate to update gives the same performance as choosing the “best” coordi-
nate to update via the more expensive Gauss-Southwell (GS) rule. (Nesterov also proposed a more clever
randomized scheme, which we consider later in this paper.) This result gives a compelling argument to use
randomized coordinate descent in contexts where the GS rule is too expensive. It also suggests that there
is no benefit to using the GS rule in contexts where it is relatively cheap. But in these contexts, the GS
rule often substantially outperforms randomized coordinate selection in practice. This suggests that either
the analysis of GS is not tight, or that there exists a class of functions for which the GS rule is as slow as
randomized coordinate descent.
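For concreteness, the sketch below contrasts the two selection rules under a standard constant step-size update (a 1/L step along the chosen coordinate); it is a generic implementation for illustration, not the exact experimental setup of this paper.

```python
import numpy as np

def coordinate_descent(grad_f, x0, L, n_iter=1000, rule="gauss-southwell"):
    """Coordinate descent with either uniform-random selection or the
    Gauss-Southwell rule (pick the coordinate with the largest |partial derivative|),
    updating the chosen coordinate with step size 1/L."""
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g))) if rule == "gauss-southwell" else np.random.randint(x.size)
        x[i] -= g[i] / L
    return x
```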
After discussing contexts in which it makes sense to use coordinate descent and the GS rule, we answer
this theoretical question by giving a tighter analysis of the GS rule (under strong-convexity and standard
smoothness assumptions) that yields the same rate as the randomized method for a restricted class of
functions, but is otherwise faster (and in some cases substantially faster). We further show that, compared
to the usual constant step-size update of the coordinate, the GS method with exact coordinate optimization
has a provably faster rate for problems satisfying a certain sparsity constraint (Section 5). We believe that
this is the first result showing a theoretical benefit of exact coordinate optimization; all previous analyses
show that these strategies obtain the same rate as constant step-size updates, even though exact optimization
tends to be faster in practice. Furthermore, in Section 6, we propose a variant of the GS rule that, similar
to Nesterov’s more clever randomized sampling scheme, uses knowledge of the Lipschitz constants of the
coordinate-wise gradients to obtain a faster rate. We also analyze approximate GS rules (Section 7), which
arXiv:1506.00552v2 [math.OC] 28 Oct 2018 |
10.1016.j.acha.2021.12.009.pdf | Appl. Comput. Harmon. Anal. 59 (2022) 85–116
Applied and Computational Harmonic Analysis
Loss landscapes and optimization in over-parameterized
non-linear systems and neural networks
Chaoyue Liua, Libin Zhub,c, Mikhail Belkinc,∗
aDepartment of Computer Science and Engineering, The Ohio State University, United States of America
bDepartment of Computer Science and Engineering, University of California, San Diego, United States
of America
cHalicioğlu Data Science Institute, University of California, San Diego, United States of America
ARTICLE INFO
Article history:
Received 9 June 2021
Received in revised form 24
December 2021
Accepted 26 December 2021
Available online 10 January 2022
Communicated by David Donoho
Keywords:
Deep learning
Non-linear optimization
Over-parameterized models
PL∗ condition
ABSTRACT
The success of deep learning is due, to a large extent, to the remarkable effectiveness
of gradient-based optimization methods applied to large neural networks. The
purpose of this work is to propose a modern view and a general mathematical
framework for loss landscapes and efficient optimization in over-parameterized
machine learning models and systems of non-linear equations, a setting that
includes over-parameterized deep neural networks. Our starting observation is that
optimization landscapes corresponding to such systems are generally not convex,
even locally around a global minimum, a condition we call essential non-convexity.
We argue that instead they satisfy PL∗, a variant of the Polyak-Łojasiewicz
condition [32,25] on most (but not all) of the parameter space, which guarantees
both the existence of solutions and efficient optimization by (stochastic) gradient
descent (SGD/GD). The PL∗ condition of these systems is closely related to the
condition number of the tangent kernel associated to a non-linear system showing
how a PL∗-based non-linear theory parallels classical analyses of over-parameterized
linear equations. We show that wide neural networks satisfy the PL∗ condition,
which explains the (S)GD convergence to a global minimum. Finally we propose a
relaxation of the PL∗ condition applicable to “almost” over-parameterized systems.
© 2021 Elsevier Inc. All rights reserved.
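For reference, a minimal statement of the PL∗ condition discussed in the abstract (notation assumed; constant-factor conventions vary between papers): a loss L satisfies the μ-PL∗ condition on a set S if

```latex
\|\nabla \mathcal{L}(\mathbf{w})\|^{2} \;\ge\; \mu \, \mathcal{L}(\mathbf{w})
\qquad \text{for all } \mathbf{w} \in S, \ \ \mu > 0 .
```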
1. Introduction
A singular feature of modern machine learning is a large number of trainable model parameters. Just in
the last few years we have seen state-of-the-art models grow from tens or hundreds of millions of parameters
to much larger systems with hundreds of billions [6] or even trillions of parameters [14]. Invariably these models
are trained by gradient descent based methods, such as Stochastic Gradient Descent (SGD) or Adam [ 19].
Why are these local gradient methods so effective in optimizing complex highly non-convex systems? In the
past few years an emerging understanding of gradient-based methods has started to focus on the insight
*Corresponding author.
E-mail address: mbelkin@ucsd.edu (M. Belkin).
https://doi.org/10.1016/j.acha.2021.12.009
|
2309.02390.pdf | 5 September 2023
Explaining grokking through circuit efficiency
Vikrant Varma*, 1, Rohin Shah*, 1, Zachary Kenton1, János Kramár1 and Ramana Kumar1
*Equal contributions,1Google DeepMind
One of the most surprising puzzles in neural network generalisation is grokking: a network with perfect
training accuracy but poor generalisation will, upon further training, transition to perfect generalisation.
We propose that grokking occurs when the task admits a generalising solution and a memorising solution,
where the generalising solution is slower to learn but more efficient, producing larger logits with the
same parameter norm. We hypothesise that memorising circuits become more inefficient with larger
training datasets while generalising circuits do not, suggesting there is a critical dataset size at which
memorisation and generalisation are equally efficient. We make and confirm four novel predictions about
grokking, providing significant evidence in favour of our explanation. Most strikingly, we demonstrate
two novel and surprising behaviours: ungrokking, in which a network regresses from perfect to low test
accuracy, and semi-grokking, in which a network shows delayed generalisation to partial rather than
perfect test accuracy.
1. Introduction
When training a neural network, we expect that once training loss converges to a low value, the
network will no longer change much. Power et al. (2021) discovered a phenomenon dubbed grokking
that drastically violates this expectation. The network first “memorises” the data, achieving low
and stable training loss with poor generalisation, but with further training transitions to perfect
generalisation. We are left with the question: why does the network’s test performance improve
dramatically upon continued training, having already achieved nearly perfect training performance?
Recent answers to this question vary widely, including the difficulty of representation learning (Liu
et al., 2022), the scale of parameters at initialisation (Liu et al., 2023), spikes in loss (“slingshots”) (Thi-
lak et al., 2022), random walks among optimal solutions (Millidge, 2022), and the simplicity of
the generalising solution (Nanda et al., 2023, Appendix E). In this paper, we argue that the last
explanation is correct, by stating a specific theory in this genre, deriving novel predictions from the
theory, and confirming the predictions empirically.
We analyse the interplay between the internal mechanisms that the neural network uses to
calculate the outputs, which we loosely call “circuits” (Olah et al., 2020). We hypothesise that there
are two families of circuits that both achieve good training performance: one which generalises well
(𝐶gen) and one which memorises the training dataset ( 𝐶mem). The key insight is that when there
are multiple circuits that achieve strong training performance, weight decay prefers circuits with high
“efficiency”, that is, circuits that require less parameter norm to produce a given logit value.
Efficiency answers our question above: if 𝐶gen is more efficient than 𝐶mem, gradient descent can
reduce nearly perfect training loss even further by strengthening 𝐶gen while weakening 𝐶mem, which
then leads to a transition in test performance. With this understanding, we demonstrate in Section 3
that three key properties are sufficient for grokking: (1) 𝐶gen generalises well while 𝐶mem does not,
(2) 𝐶gen is more efficient than 𝐶mem, and (3) 𝐶gen is learned more slowly than 𝐶mem.
Since 𝐶gen generalises well, it automatically works for any new data points that are added to
the training dataset, and so its efficiency should be independent of the size of the training dataset.
In contrast, 𝐶mem must memorise any additional data points added to the training dataset, and so
Corresponding author(s): vikrantvarma@deepmind.com, rohinmshah@deepmind.com
arXiv:2309.02390v1 [cs.LG] 5 Sep 2023 |
10.1016.j.cell.2023.12.035.pdf | Article
Brain-wide neural activity underlying memory-
guided movement
Graphical abstract
[Graphical abstract panels: anatomy-guided multi-regional simultaneous recordings; mesoscale activity map data; movement encoding strength medulla > midbrain > cortex; choice coding concentrated in ALM projection zones (striatum, thalamus, midbrain); choice-related activity correlated across brain areas.]
Highlights
• Anatomy-guided activity recordings in multi-regional neural circuits during behavior
• Movement encoding is strongest in the medulla, followed by the midbrain and cortex
• Choice coding arises in a specific multi-regional circuit distributed across the brain
• Coding of choice and action exhibit strong correlations across brain areas
Authors
Susu Chen, Yi Liu, Ziyue Aiden Wang, ..., Shaul Druckmann, Nuo Li, Karel Svoboda
Correspondence
shauld@stanford.edu (S.D.), nuo.li@bcm.edu (N.L.), karel.svoboda@alleninstitute.org (K.S.)
In brief
A sparse neural network, distributed across major brain compartments, produces tightly orchestrated activity patterns underlying decision-making and movement initiation.
Chen et al., 2024, Cell 187, 676–691
February 1, 2024 © 2024 The Authors. Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.035
|
2309.14525.pdf | Preprint
ALIGNING LARGE MULTIMODAL MODELS
WITH FACTUALLY AUGMENTED RLHF
Zhiqing Sun∗♠, Sheng Shen∗♣, Shengcao Cao∗♢
Haotian Liu♡, Chunyuan Li♮, Yikang Shen△, Chuang Gan†∇△, Liang-Yan Gui†♢
Yu-Xiong Wang†♢, Yiming Yang†♠, Kurt Keutzer†♣, Trevor Darrell†♣
♣UC Berkeley,♠CMU,♢UIUC,♡UW–Madison,∇UMass Amherst
♮Microsoft Research,△MIT-IBM Watson AI Lab
ABSTRACT
Large Multimodal Models (LMM) are built across modalities and the misalign-
ment between two modalities can result in “hallucination”, generating textual out-
puts that are not grounded by the multimodal information in context. To address
the multimodal misalignment issue, we adapt the Reinforcement Learning from
Human Feedback (RLHF) from the text domain to the task of vision-language
alignment, where human annotators are asked to compare two responses and pin-
point the more hallucinated one, and the vision-language model is trained to max-
imize the simulated human rewards. We propose a new alignment algorithm
called Factually Augmented RLHF that augments the reward model with addi-
tional factual information such as image captions and ground-truth multi-choice
options, which alleviates the reward hacking phenomenon in RLHF and further
improves the performance. We also enhance the GPT-4-generated training data
(for vision instruction tuning) with previously available human-written image-
text pairs to improve the general capabilities of our model. To evaluate the pro-
posed approach in real-world scenarios, we develop a new evaluation benchmark
MMHAL-BENCH with a special focus on penalizing hallucinations. As the first
LMM trained with RLHF, our approach achieves remarkable improvement on the
LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4
(while previous best methods can only achieve the 87% level), and an improve-
ment by 60% on MMHAL-BENCH over other baselines. We open-source our code,
model, and data at https://llava-rlhf.github.io.
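To convey the idea of factual augmentation at the interface level, the sketch below shows one way the reward model's input could be assembled; the function signature, argument names, and prompt format are hypothetical and for illustration only, not the implementation released with this paper.

```python
def factually_augmented_reward(reward_model, image, prompt, response,
                               caption=None, gt_options=None):
    """Score a response conditioned not only on the image and prompt but also on
    extra factual context (e.g., a ground-truth caption or answer options), which
    makes hallucinated answers easier for the reward model to penalize."""
    facts = []
    if caption:
        facts.append(f"Image caption: {caption}")
    if gt_options:
        facts.append(f"Answer options: {gt_options}")
    augmented_prompt = "\n".join(facts + [prompt])
    return reward_model(image=image, prompt=augmented_prompt, response=response)
```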
1 INTRODUCTION
Large Language Models (LLMs; Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023)) can
delve into the multimodal realm either by further pre-training with image-text pairs (Alayrac et al.;
Awadalla et al., 2023) or by fine-tuning them with specialized vision instruction tuning datasets (Liu
et al., 2023a; Zhu et al., 2023), leading to the emergence of powerful Large Multimodal Models
(LMMs). Yet, developing LMMs faces challenges, notably the gap between the volume and quality
of multimodal data versus text-only datasets. Consider the LLaVA model (Liu et al., 2023a), which is
initialized from a pre-trained vision encoder (Radford et al., 2021) and an instruction-tuned language
model (Chiang et al., 2023). It is trained on just 150K synthetic image-based dialogues, which is
much less in comparison to text-only models (e.g., Flan (Longpre et al., 2023), which utilizes over 100M
examples spanning 1800 tasks). Such limitations in data can lead to misalignment between the vision
and language modalities. Consequently, LMMs may produce hallucinated outputs, which are not
accurately anchored to the context provided by images.
To mitigate the challenges posed by the scarcity of high-quality visual instruction tuning data for
LMM training, we introduce LLaVA-RLHF , a vision-language model trained for improved mul-
timodal alignment. One of our key contributions is the adaptation of the Reinforcement Learning
from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), a
general and scalable alignment paradigm that shows great success for text-based AI agents, to the
∗Equal contribution. Ordering is determined by dice rolling. †Equal advising.
arXiv:2309.14525v1 [cs.CV] 25 Sep 2023
2306.12672.pdf | From Word Models to World Models:
Translating from Natural Language to the
Probabilistic Language of Thought
Lionel Wong1⋆, Gabriel Grand1⋆, Alexander K. Lew1, Noah D. Goodman2, Vikash K.
Mansinghka1, Jacob Andreas1, Joshua B. Tenenbaum1
⋆Equal contribution.
1MIT,2Stanford
Abstract
How does language inform our downstream thinking? In particular, how do humans make meaning from
language—and how can we leverage a theory of linguistic meaning to build machines that think in more
human-like ways? In this paper, we propose rational meaning construction , a computational framework
for language-informed thinking that combines neural models of language with probabilistic models for
rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language
into a probabilistic language of thought (PLoT)—a general-purpose symbolic substrate for probabilistic,
generative world modeling. Our architecture integrates two powerful computational tools that have not
previously come together: we model thinking with probabilistic programs , an expressive representation for
flexible commonsense reasoning; and we model meaning construction with large language models (LLMs),
which support broad-coverage translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework in action through examples covering
four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual
and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can
generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We
extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics
engines, and goal-directed planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of world models themselves.
We hope this work will help to situate contemporary developments in LLMs within a broader cognitive
picture of human language and intelligence, providing a roadmap towards AI systems that synthesize the
insights of both modern and classical computational perspectives.
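To indicate the kind of computation the framework delegates to the probabilistic side, the sketch below shows conditioning and querying a generative world model by crude rejection sampling. The world-model, condition, and query functions stand in for code that an LLM would generate in a probabilistic programming language rather than raw Python, and real probabilistic programming systems use far more efficient inference than rejection sampling.

```python
def query_world_model(world_model, condition, query, n_samples=10_000):
    """Crude Bayesian conditioning: draw latent worlds from the generative model,
    keep those consistent with the observed condition, and estimate the posterior
    expectation of the query."""
    accepted = [s for s in (world_model() for _ in range(n_samples)) if condition(s)]
    return sum(query(s) for s in accepted) / len(accepted) if accepted else float("nan")
```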
1 Introduction
Language expresses the vast internal landscape of our thoughts. We use language to convey what we believe,
what we are uncertain about, and what we do not know. We talk about what we see in the world around
us, and what we imagine in real or wholly hypothetical futures. We discuss what we want and what we
plan to do, and dissect what others want and what we think they will do. We build and pass on new bodies
of knowledge in language—we ask questions and offer explanations, give commands and instructions, and
propose and refute theories. Some of these ideas can be expressed in part through other means. But language
stands apart for its flexibility and breadth, and its seeming proximity to our thoughts.
Whatislanguage? How does language get its meaning, and when should we say that a person or machine
knows, understands, and can use it? What is the relationship between language and the rest of general
cognition—what allows language to inform and support so much of thought? This paper focuses on these
questions as they relate to humanlanguage and thought, in computational terms. What integrated cognitive
theory can model how language relates to the other core systems of human cognition? If we seek to build AI
systems that emulate how humans talk and think, what architecture can integrate language robustly into
systems that support the full scope of our thought?
Code for the examples in this paper is available at: github.com/gabegrand/world-models .
Correspondence: co-primary authors ( zyzzyva@mit.edu, gg@mit.edu ); co-supervisors ( jda@mit.edu, jbt@mit.edu ).arXiv:2306.12672v2 [cs.CL] 23 Jun 2023 |
2210.17323.pdf | Published as a conference paper at ICLR 2023
GPTQ: A CCURATE POST-TRAINING QUANTIZATION
FOR GENERATIVE PRE-TRAINED TRANSFORMERS
Elias Frantar∗
IST AustriaSaleh Ashkboos
ETH ZurichTorsten Hoefler
ETH ZurichDan Alistarh
IST Austria & NeuralMagic
ABSTRACT
Generative Pre-trained Transformer models, known as GPT or OPT, set them-
selves apart through breakthrough performance across complex language mod-
elling tasks, but also by their extremely high computational and storage costs.
Specifically, due to their massive size, even inference for large, highly-accurate
GPT models may require multiple performant GPUs, which limits the usability
of such models. While there is emerging work on relieving this pressure via
model compression, the applicability and performance of existing compression
techniques is limited by the scale and complexity of GPT models. In this paper,
we address this challenge, and propose GPTQ, a new one-shot weight quantiza-
tion method based on approximate second-order information, that is both highly-
accurate and highly-efficient. Specifically, GPTQ can quantize GPT models with
175 billion parameters in approximately four GPU hours, reducing the bitwidth
down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the
uncompressed baseline. Our method more than doubles the compression gains rel-
ative to previously-proposed one-shot quantization methods, preserving accuracy,
allowing us for the first time to execute a 175 billion-parameter model inside a
single GPU for generative inference. Moreover, we also show that our method
can still provide reasonable accuracy in the extreme quantization regime, in which
weights are quantized to 2-bit or even ternary quantization levels. We show ex-
perimentally that these improvements can be leveraged for end-to-end inference
speedups over FP16, of around 3.25x when using high-end GPUs (NVIDIA A100)
and 4.5x when using more cost-effective ones (NVIDIA A6000). The implemen-
tation is available at https://github.com/IST-DASLab/gptq .
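For orientation, the sketch below shows only the naive round-to-nearest (RTN) baseline that GPTQ improves upon; the bit-width and per-row scaling are illustrative choices, and GPTQ itself additionally compensates quantization error using approximate second-order (Hessian) information, which is not reproduced here.

```python
import numpy as np

def quantize_rtn(W: np.ndarray, bits: int = 4) -> np.ndarray:
    """Per-row symmetric round-to-nearest quantization of a weight matrix.

    This is only the naive baseline; GPTQ additionally updates the remaining
    weights column by column to compensate for the error introduced here.
    """
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit signed
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-12)                 # guard against all-zero rows
    q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return q * scale                                 # dequantized weights

W = np.random.randn(8, 16).astype(np.float32)
W_hat = quantize_rtn(W, bits=4)
print("mean squared quantization error:", np.mean((W - W_hat) ** 2))
```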
1 I NTRODUCTION
Pre-trained generative models from the Transformer (Vaswani et al., 2017) family, commonly known
as GPT or OPT (Radford et al., 2019; Brown et al., 2020; Zhang et al., 2022), have shown break-
through performance for complex language modelling tasks, leading to massive academic and prac-
tical interest. One major obstacle to their usability is computational and storage cost, which ranks
among the highest for known models. For instance, the best-performing model variants, e.g. GPT3-
175B, have in the order of 175 billion parameters and require tens-to-hundreds of GPU years to
train (Zhang et al., 2022). Even the simpler task of inferencing over a pre-trained model, which is
our focus in this paper, is highly challenging: for instance, the parameters of GPT3-175B occupy
326GB (counting in multiples of 1024) of memory when stored in a compact float16 format. This
exceeds the capacity of even the highest-end single GPUs, and thus inference must be performed
using more complex and expensive setups, such as multi-GPU deployments.
Although a standard approach to eliminating these overheads is model compression , e.g. (Hoefler
et al., 2021; Gholami et al., 2021), surprisingly little is known about compressing such models for
inference. One reason is that more complex methods for low-bitwidth quantization or model prun-
ing usually require model retraining , which is extremely expensive for billion-parameter models.
Alternatively, post-training methods (Nagel et al., 2020; Wang et al., 2020; Hubara et al., 2020;
Nahshan et al., 2021), which compress the model in one shot, without retraining, would be very
appealing. Unfortunately, the more accurate variants of such methods (Li et al., 2021; Hubara et al.,
2021; Frantar et al., 2022) are complex and challenging to scale to billions of parameters (Yao et al.,
∗Corresponding author: elias.frantar@ist.ac.at
1arXiv:2210.17323v2 [cs.LG] 22 Mar 2023 |
10.1016.j.cell.2024.01.026.pdf | Article
Cryo-EM structures of the plant plastid-encoded
RNA polymerase
Graphical abstract
Highlights
- Plant chloroplast RNA polymerase comprises a catalytic core and four peripheral modules
- The scaffold module stabilizes the catalytic core and bridges other modules
- The protection module has SOD activity, and the RNA module recognizes RNA sequence
- The regulation module likely controls transcription activity of the catalytic core
Authors
Xiao-Xian Wu, Wen-Hui Mu, Fan Li, ..., Chanhong Kim, Fei Zhou, Yu Zhang
Correspondence
zhoufei@mail.hzau.edu.cn (F.Z.),
yzhang@cemps.ac.cn (Y.Z.)
In brief
The cryo-EM structures of Nicotiana tabacum (tobacco) chloroplast RNA polymerase apoenzyme and transcription elongation complexes reveal the composition, assembly, function, and evolution of the chloroplast transcription apparatus.
Wu et al., 2024, Cell 187, 1127–1144
February 29, 2024 © 2024 Elsevier Inc.
https://doi.org/10.1016/j.cell.2024.01.026
|
10.1038.s41467-021-26529-9.pdf | ARTICLE
The generative capacity of probabilistic protein
sequence models
Francisco McGee1,2,3, Sandro Hauri4,5, Quentin Novinger2,5, Slobodan Vucetic4,5, Ronald M. Levy1,3,6,7,
Vincenzo Carnevale2,3✉& Allan Haldane1,7✉
Potts models and variational autoencoders (VAEs) have recently gained popularity as gen-
erative protein sequence models (GPSMs) to explore fitness landscapes and predict mutation
effects. Despite encouraging results, current model evaluation metrics leave unclear whether GPSMs faithfully reproduce the complex multi-residue mutational patterns observed in natural sequences due to epistasis. Here, we develop a set of sequence statistics to assess the “generative capacity” of three current GPSMs: the pairwise Potts Hamiltonian, the VAE, and the site-independent model. We show that the Potts model’s generative capacity is largest, as the higher-order mutational statistics generated by the model agree with those observed for natural sequences, while the VAE’s lies between the Potts and site-independent models. Importantly, our work provides a new framework for evaluating and interpreting GPSM accuracy which emphasizes the role of higher-order covariation and epistasis, with broader implications for probabilistic sequence models in general.
https://doi.org/10.1038/s41467-021-26529-9 OPEN
1Center for Biophysics and Computational Biology, Temple University, Philadelphia 19122, USA.2Institute for Computational Molecular Science, Temple
University, Philadelphia 19122, USA.3Department of Biology, Temple University, Philadelphia 19122, USA.4Center for Hybrid Intelligence, Temple University,
Philadelphia 19122, USA.5Department of Computer & Information Sciences, Temple University, Philadelphia 19122, USA.6Department of Physics, Temple
University, Philadelphia 19122, USA.7Department of Chemistry, Temple University, Philadelphia 19122, USA.✉email: vincenzo.carnevale@temple.edu ;
allan.haldane@temple.edu
NATURE COMMUNICATIONS | (2021) 12:6302 | https://doi.org/10.1038/s41467-021-26529-9 | www.nature.com/naturecommunications |
2205.11916.pdf | Large Language Models are Zero-Shot Reasoners
Takeshi Kojima
The University of Tokyo
t.kojima@weblab.t.u-tokyo.ac.jp
Shixiang Shane Gu
Google Research, Brain Team
Machel Reid
Google Research∗
Yutaka Matsuo
The University of Tokyo
Yusuke Iwasawa
The University of Tokyo
Abstract
Pretrained large language models (LLMs) are widely used in many sub-fields of
natural language processing (NLP) and generally known as excellent few-shot
learners with task-specific exemplars. Notably, chain of thought (CoT) prompting,
a recent technique for eliciting complex multi-step reasoning through step-by-
step answer examples, achieved the state-of-the-art performances in arithmetics
and symbolic reasoning, difficult system-2 tasks that do not follow the standard
scaling laws for LLMs. While these successes are often attributed to LLMs’
ability for few-shot learning, we show that LLMs are decent zero-shot reasoners
by simply adding “Let’s think step by step” before each answer. Experimental
results demonstrate that our Zero-shot-CoT, using the same single prompt template,
significantly outperforms zero-shot LLM performances on diverse benchmark
reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP),
symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date
Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot
examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and
GSM8K from 10.4% to 40.7% with large-scale InstructGPT model (text-davinci-
002), as well as similar magnitudes of improvements with another off-the-shelf
large model, 540B parameter PaLM. The versatility of this single prompt across
very diverse reasoning tasks hints at untapped and understudied fundamental
zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive
capabilities may be extracted by simple prompting. We hope our work not only
serves as the minimal strongest zero-shot baseline for the challenging reasoning
benchmarks, but also highlights the importance of carefully exploring and analyzing
the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning
datasets or few-shot exemplars.
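A minimal sketch of the prompting scheme described above follows. The two-stage structure (reasoning extraction followed by answer extraction) mirrors the paper's pipeline, while the llm callable and the exact prompt wording are assumptions for illustration.

```python
def zero_shot_cot(question: str, llm) -> str:
    """Two-stage Zero-shot-CoT prompting.

    `llm` is assumed to be any callable mapping a prompt string to a completion.
    """
    # Stage 1: elicit a reasoning chain with the single trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = llm(reasoning_prompt)
    # Stage 2: extract the final answer from the generated reasoning.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return llm(answer_prompt)
```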
1 Introduction
Scaling up the size of language models has been key ingredients of recent revolutions in natural
language processing (NLP) [Vaswani et al., 2017, Devlin et al., 2019, Raffel et al., 2020, Brown et al.,
2020, Thoppilan et al., 2022, Rae et al., 2021, Chowdhery et al., 2022]. The success of large language
models (LLMs) is often attributed to (in-context) few-shot or zero-shot learning. It can solve various
tasks by simply conditioning the models on a few examples (few-shot) or instructions describing the
task (zero-shot). The method of conditioning the language model is called “prompting” [Liu et al.,
2021b], and designing prompts either manually [Schick and Schütze, 2021, Reynolds and McDonell,
2021] or automatically [Gao et al., 2021, Shin et al., 2020] has become a hot topic in NLP.
∗Work done while at The University of Tokyo.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).arXiv:2205.11916v4 [cs.CL] 29 Jan 2023 |
2308.06259v3.pdf | Published as a conference paper at ICLR 2024
SELF-ALIGNMENT WITH INSTRUCTION BACKTRANS -
LATION
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer
Jason Weston &Mike Lewis
Meta
{xianl,jase,mikelewis}@meta.com
ABSTRACT
We present a scalable method to build a high quality instruction following language
model by automatically labelling human-written text with corresponding instruc-
tions. Our approach, named instruction backtranslation , starts with a language
model finetuned on a small amount of seed data, and a given web corpus. The seed
model is used to construct training examples by generating instruction prompts
for web documents ( self-augmentation ), and then selecting high quality examples
from among these candidates ( self-curation ). This data is then used to finetune
a stronger model. Finetuning LLaMa on two iterations of our approach yields a
model that outperforms all other LLaMa-based models on the Alpaca leaderboard
that do not rely on distillation data, demonstrating highly effective self-alignment.
1 I NTRODUCTION
Aligning large language models (LLMs) to perform instruction following typically requires finetuning
on large amounts of human-annotated instructions or preferences (Ouyang et al., 2022; Touvron
et al., 2023a; Bai et al., 2022a) or distilling outputs from more powerful models (Wang et al., 2022a;
Honovich et al., 2022; Taori et al., 2023; Chiang et al., 2023; Peng et al., 2023; Xu et al., 2023).
Recent work highlights the importance of human-annotation data quality (Zhou et al., 2023; Köpf
et al., 2023). However, annotating instruction following datasets with such quality is hard to scale.
In this work, we instead leverage large amounts of unlabelled data to create a high quality instruction
tuning dataset by developing an iterative self-training algorithm. The method uses the model itself
to both augment and curate high quality training examples to improve its own performance. Our
approach, named instruction backtranslation , is inspired by the classic backtranslation method from
machine translation, in which human-written target sentences are automatically annotated with
model-generated source sentences in another language (Sennrich et al., 2015).
Our method starts with a seed instruction following model and a web corpus. The model is first used
toself-augment its training set: for each web document, it creates an instruction following training
example by predicting a prompt (instruction) that would be correctly answered by (a portion of) that
document. Directly training on such data (similarly to Köksal et al. (2023)) gives poor results in our
experiments, both because of the mixed quality of human written web text, and noise in the generated
instructions. To remedy this, we show that the same seed model can be used to self-curate the set of
newly created augmentation data by predicting their quality, and can then be self-trained on only the
highest quality (instruction, output) pairs. The procedure is then iterated, using the improved model
to better curate the instruction data, and re-training to produce a better model.
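The overall loop can be summarized in the pseudocode-style sketch below; generate_instruction, rate_quality, finetune, and the quality threshold are assumed interfaces for illustration, not the authors' implementation.

```python
def instruction_backtranslation(seed_model, finetune, web_docs, seed_data,
                                iterations: int = 2, threshold: int = 4):
    """Minimal sketch of the self-augment / self-curate loop (illustrative only)."""
    model, train_set = seed_model, list(seed_data)
    for _ in range(iterations):
        # Self-augmentation: predict an instruction each web document would answer.
        candidates = [(model.generate_instruction(doc), doc) for doc in web_docs]
        # Self-curation: keep only pairs the current model rates as high quality.
        curated = [(inst, out) for inst, out in candidates
                   if model.rate_quality(inst, out) >= threshold]
        # Re-train on seed data plus the curated pairs, then iterate.
        model = finetune(model, train_set + curated)
    return model
```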
Our resulting model, Humpback , outperforms all other existing non-distilled models on the Alpaca
leaderboard (Li et al., 2023). Overall, instruction backtranslation is a scalable method for enabling
language models to improve their own ability to follow instructions.
2 M ETHOD
Our self-training approach assumes access to a base language model, a small amount of seed data,
and a collection of unlabelled examples, e.g. a web corpus. The unlabelled data is a large, diverse set
1arXiv:2308.06259v3 [cs.CL] 12 Mar 2024 |
2209.12892.pdf | LEARNING TO LEARN WITH GENERATIVE MODELS OF
NEURAL NETWORK CHECKPOINTS
William Peebles∗Ilija Radosavovic∗Tim Brooks Alexei A. Efros Jitendra Malik
University of California, Berkeley
ABSTRACT
We explore a data-driven approach for learning to optimize neural networks. We
construct a dataset of neural network checkpoints and train a generative model on
the parameters. In particular, our model is a conditional diffusion transformer that,
given an initial input parameter vector and a prompted loss, error, or return, predicts
the distribution over parameter updates that achieve the desired metric. At test
time, it can optimize neural networks with unseen parameters for downstream tasks
in just one update. We find that our approach successfully generates parameters
for a wide range of loss prompts. Moreover, it can sample multimodal parameter
solutions and has favorable scaling properties. We apply our method to different
neural network architectures and tasks in supervised and reinforcement learning.
1 I NTRODUCTION
Gradient-based optimization is the fuel of modern deep learning. Techniques of this class, such
as SGD (Robbins & Monro, 1951) and Adam (Kingma & Ba, 2015), are easy to implement, scale
reasonably well and converge to surprisingly good solutions—even in high-dimensional, non-convex
neural network loss landscapes. Over the past decade, they have enabled impressive results in
computer vision (Krizhevsky et al., 2012; Girshick et al., 2014), natural language processing (Vaswani
et al., 2017; Radford et al., 2018) and audio generation (Van Den Oord et al., 2016).
While these manual optimization techniques have led to large advances, they suffer from an important
limitation: they are unable to improve from past experience. For example, SGD will not converge
any faster when used to optimize the same neural network architecture from the same initialization
the 100th time versus the first time. Learned optimizers capable of leveraging their past experiences
have the potential to overcome this limitation and may accelerate future progress in deep learning.
Of course, the concept of learning improved optimizers is not new and dates back to the 1980s, if not
earlier, following early work from Schmidhuber (1987) and Bengio et al. (1991). In recent years, sig-
nificant effort has been spent on designing algorithms that learn via nested meta-optimization, where
the inner loop optimizes the task-level objective and the outer loop learns the optimizer (Andrychow-
icz et al., 2016; Li & Malik, 2016; Finn et al., 2017). In some instances, these approaches outperform
manual optimizers. However, they are challenging to train in practice due to a reliance on unrolled
optimization and reinforcement learning.
Taking a modern deep learning perspective suggests a simple, scalable and data-driven approach to
this problem. Over the past decade, our community has trained a massive number of checkpoints.
These checkpoints contain a wealth of information: diverse parameter configurations and rich metrics
such as test losses, classification errors and RL returns that describe the quality of the checkpoint.
Instead of leveraging large-scale datasets of images or text, we propose learning from large-scale
datasets of checkpoints recorded over the course of many training runs.
To this end, we create a dataset of neural network checkpoints (Figure 1, left). Our dataset consists of
23 million checkpoints from over a hundred thousand training runs. We collect data from supervised
learning tasks (MNIST, CIFAR-10) as well as reinforcement learning tasks (Cartpole), and across
different neural network architectures (MLPs, CNNs). In addition to parameters, we record relevant
task-level metrics in each checkpoint, such as test losses and classification errors.
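As a rough illustration of what a single checkpoint record might contain (the authors' exact serialization is not reproduced here), the sketch below flattens a model's parameters into one vector and attaches task-level metrics.

```python
import torch

def checkpoint_record(model: torch.nn.Module, test_loss: float, test_error: float) -> dict:
    """Flatten a model's parameters into a single vector and attach task metrics."""
    flat = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    return {"parameters": flat, "test_loss": test_loss, "test_error": test_error}

# Example: record a toy MLP checkpoint during training (values are illustrative).
mlp = torch.nn.Sequential(torch.nn.Linear(784, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
record = checkpoint_record(mlp, test_loss=0.42, test_error=0.11)
print(record["parameters"].shape)  # one flat parameter vector per checkpoint
```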
*Equal contribution. Code, data and pre-trained models are available on our project page.
1arXiv:2209.12892v1 [cs.LG] 26 Sep 2022 |
2023.findings-acl.426.pdf | Findings of the Association for Computational Linguistics: ACL 2023 , pages 6810–6828
July 9-14, 2023 ©2023 Association for Computational Linguistics
“Low-Resource” Text Classification: A Parameter-Free Classification
Method with Compressors
Zhiying Jiang1,2, Matthew Y.R. Yang1, Mikhail Tsirlin1,
Raphael Tang1, Yiqin Dai2and Jimmy Lin1
1University of Waterloo2AFAIK
{zhiying.jiang, m259yang, mtsirlin, r33tang}@uwaterloo.ca
quinn@afaik.io jimmylin@uwaterloo.ca
Abstract
Deep neural networks (DNNs) are often used
for text classification due to their high accu-
racy. However, DNNs can be computationally
intensive, requiring millions of parameters and
large amounts of labeled data, which can make
them expensive to use, to optimize, and to trans-
fer to out-of-distribution (OOD) cases in prac-
tice. In this paper, we propose a non-parametric
alternative to DNNs that’s easy, lightweight,
and universal in text classification: a combi-
nation of a simple compressor like gzip with
ak-nearest-neighbor classifier. Without any
training parameters, our method achieves re-
sults that are competitive with non-pretrained
deep learning methods on six in-distribution
datasets. It even outperforms BERT on all five
OOD datasets, including four low-resource lan-
guages. Our method also excels in the few-shot
setting, where labeled data are too scarce to
train DNNs effectively. Code is available at
https://github.com/bazingagin/npc_gzip.
1 Introduction
Text classification, as one of the most fundamen-
tal tasks in natural language processing (NLP),
has improved substantially with the help of neu-
ral networks (Li et al., 2022). However, most neu-
ral networks are data-hungry, the degree of which
increases with the number of parameters. Hyper-
parameters must be carefully tuned for different
datasets, and the preprocessing of text data (e.g.,
tokenization, stop word removal) needs to be tai-
lored to the specific model and dataset. Despite
their ability to capture latent correlations and rec-
ognize implicit patterns (LeCun et al., 2015), com-
plex deep neural networks may be overkill for sim-
ple tasks such as topic classification, and lighter
alternatives are usually good enough. For exam-
ple, Adhikari et al. (2019b) find that a simple long
short-term memory network (LSTM; Hochreiter
and Schmidhuber, 1997) with appropriate regular-
ization can achieve competitive results. Shen et al. (2018) further show that even word-embedding-
based methods can achieve results comparable to
convolutional neural networks (CNNs) and recur-
rent neural networks (RNNs).
Among all the endeavors for a lighter alternative
to DNNs, one stream of work focuses on using com-
pressors for text classification. There have been
several studies in this field (Teahan and Harper,
2003; Frank et al., 2000), most of them based on
the intuition that the minimum cross entropy be-
tween a document and a language model of a class
built by a compressor indicates the class of the
document. However, previous works fall short of
matching the quality of neural networks.
Addressing these shortcomings, we propose a
text classification method combining a lossless
compressor, a compressor-based distance metric
with a k-nearest-neighbor classifier ( kNN). It uti-
lizes compressors in capturing regularity, which
is then translated into similarity scores by a
compressor-based distance metric. With the re-
sulting distance matrix, we use kNN to perform
classification. We carry out experiments on seven
in-distribution datasets and five out-of-distribution
ones. With a simple compressor like gzip, our
method achieves results competitive with those of
DNNs on six out of seven datasets and outperforms
all methods including BERT on all OOD datasets.
It also surpasses all models by a large margin under
few-shot settings.
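A compact sketch of this procedure is given below: gzip supplies the compressed lengths for the Normalized Compression Distance (NCD), and a k-nearest-neighbor vote over those distances yields the label. The toy training set and the choice of k are illustrative.

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    """Normalized Compression Distance using gzip as the compressor."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(test_text: str, train_set: list[tuple[str, str]], k: int = 3) -> str:
    """k-nearest-neighbor vote over compressor-based distances."""
    neighbors = sorted(train_set, key=lambda pair: ncd(test_text, pair[0]))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [("the team won the championship game", "sports"),
         ("stocks fell sharply after the earnings report", "business"),
         ("the striker scored twice in the final", "sports")]
print(classify("the goalkeeper saved a penalty", train, k=1))
```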
Our contributions are as follows: (1) we are the
first to use NCD with kNN for topic classifica-
tion, allowing us to carry out comprehensive ex-
periments on large datasets with compressor-based
methods; (2) we show that our method achieves
results comparable to non-pretrained DNNs on six
out of seven in-distribution datasets; (3) on OOD
datasets, we show that our method outperforms
all methods, including pretrained models such as
BERT; and (4) we demonstrate that our method ex-
cels in the few-shot setting of scarce labeled data. |
1911.00172.pdf | Published as a conference paper at ICLR 2020
GENERALIZATION THROUGH MEMORIZATION :
NEAREST NEIGHBOR LANGUAGE MODELS
Urvashi Khandelwal†∗, Omer Levy‡, Dan Jurafsky†, Luke Zettlemoyer‡& Mike Lewis‡
†Stanford University
‡Facebook AI Research
{urvashik,jurafsky}@stanford.edu
{omerlevy,lsz,mikelewis}@fb.com
ABSTRACT
We introduce kNN-LMs, which extend a pre-trained neural language model (LM)
by linearly interpolating it with a k-nearest neighbors ( kNN) model. The near-
est neighbors are computed according to distance in the pre-trained LM embed-
ding space, and can be drawn from any text collection, including the original LM
training data. Applying this augmentation to a strong W IKITEXT -103 LM, with
neighbors drawn from the original training set, our kNN-LM achieves a new state-
of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional train-
ing. We also show that this approach has implications for efficiently scaling up to
larger training sets and allows for effective domain adaptation, by simply varying
the nearest neighbor datastore, again without further training. Qualitatively, the
model is particularly helpful in predicting rare patterns, such as factual knowl-
edge. Together, these results strongly suggest that learning similarity between se-
quences of text is easier than predicting the next word, and that nearest neighbor
search is an effective approach for language modeling in the long tail.
1 I NTRODUCTION
Neural language models (LMs) typically solve two subproblems: (1) mapping sentence prefixes to
fixed-sized representations, and (2) using these representations to predict the next word in the text
(Bengio et al., 2003; Mikolov et al., 2010). We present a new language modeling approach that is
based on the hypothesis that the representation learning problem may be easier than the prediction
problem. For example, any English speaker knows that Dickens is the author of andDickens wrote
will have essentially the same distribution over the next word, even if they do not know what that
distribution is. We provide strong evidence that existing language models, similarly, are much better
at the first problem, by using their prefix embeddings in a simple nearest neighbor scheme that
significantly improves overall performance.
We introduce kNN-LM, an approach that extends a pre-trained LM by linearly interpolating its next
word distribution with a k-nearest neighbors ( kNN) model. The nearest neighbors are computed
according to distance in the pre-trained embedding space and can be drawn from any text collec-
tion, including the original LM training data. This approach allows rare patterns to be memorized
explicitly, rather than implicitly in model parameters. It also improves performance when the same
training data is used for learning the prefix representations and the kNN model, strongly suggesting
that the prediction problem is more challenging than previously appreciated.
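The interpolation itself is simple, as the sketch below illustrates; the interpolation weight lam, the temperature over retrieval distances, and the retrieval interface are assumptions for illustration rather than the paper's exact settings.

```python
import numpy as np

def knn_lm_interpolate(p_lm: np.ndarray, knn_dists: np.ndarray, knn_next_ids: np.ndarray,
                       vocab_size: int, lam: float = 0.25, temperature: float = 1.0) -> np.ndarray:
    """Interpolate the base LM's next-word distribution with a kNN distribution.

    knn_dists[i] is the distance from the current prefix embedding to the i-th
    retrieved datastore key; knn_next_ids[i] is the word that followed that key.
    """
    # Turn (negative) distances into a distribution over retrieved neighbors ...
    weights = np.exp(-knn_dists / temperature)
    weights /= weights.sum()
    # ... and aggregate neighbors that share the same next word.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, knn_next_ids, weights)
    # Final kNN-LM distribution: lam * p_knn + (1 - lam) * p_lm.
    return lam * p_knn + (1.0 - lam) * p_lm
```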
To better measure these effects, we conduct an extensive empirical evaluation. Applying our kNN
augmentation to a strong W IKITEXT -103 LM using only the original dataset achieves a new state-
of-the-art perplexity of 15.79 – a 2.86 point improvement over the base model (Baevski & Auli,
2019) – with no additional training. We also show that the approach has implications for efficiently
scaling up to larger training sets and allows for effective domain adaptation, by simply varying the
nearest neighbor datastore. Training a model on 100-million tokens and using kNN search over a
3-billion token dataset can outperform training the same model on all 3-billion tokens, opening a
∗Work done while the first author was interning at Facebook AI Research.
1arXiv:1911.00172v2 [cs.CL] 15 Feb 2020 |
2024.03.18.585544v1.full.pdf |
Towards Interpretable Cryo-EM: Disentangling
Latent Spaces of Molecular Conformations
David A. Klindt1,2,∗, Aapo Hyv ¨arinen3, Axel Levy1,4, Nina Miolane2and
Fr´ed´eric Poitevin1
1LCLS, SLAC National Accelerator Laboratory, Stanford University, CA, USA
2Department of Electrical and Computer Engineering, UCSB, CA, USA
3Department of Computer Science, University of Helsinki, Finland
4Department of Electrical Engineering, Stanford, CA, USA
Correspondence*:
David A. Klindt
klindt.david@gmail.com
ABSTRACT
Molecules are essential building blocks of life and their different conformations (i.e., shapes) crucially determine the functional role that they play in living organisms. Cryogenic Electron Microscopy (cryo-EM) allows for acquisition of large image datasets of individual molecules. Recent advances in computational cryo-EM have made it possible to learn latent variable models of conformation landscapes. However, interpreting these latent spaces remains a challenge as their individual dimensions are often arbitrary. The key message of our work is that this interpretation challenge can be viewed as an Independent Component Analysis (ICA) problem where we seek models that have the property of identifiability. That means, they have an essentially unique solution, representing a conformational latent space that separates the different degrees of freedom a molecule is equipped with in nature. Thus, we aim to advance the computational field of cryo-EM beyond visualizations as we connect it with the theoretical framework of (nonlinear) ICA and discuss the need for identifiable models, improved metrics, and benchmarks. Moving forward, we propose future directions for enhancing the disentanglement of latent spaces in cryo-EM, refining evaluation metrics and exploring techniques that leverage physics-based decoders of biomolecular systems. Moreover, we discuss how future technological developments in time-resolved single particle imaging may enable the application of nonlinear ICA models that can discover the true conformation changes of molecules in nature. The pursuit of interpretable conformational latent spaces will empower researchers to unravel complex biological processes and facilitate targeted interventions. This has significant implications for drug discovery and structural biology more broadly. More generally, latent variable models are deployed widely across many scientific disciplines. Thus, the argument we present in this work has much broader applications in AI for science if we want to move from impressive nonlinear neural network models to mathematically grounded methods that can help us learn something new about nature.
Keywords: cryo-EM, machine learning, ICA, AI for science, disentanglement, physics-based models
|
2309.03649.pdf | Exploring kinase DFG loop conformational
stability with AlphaFold2-RAVE
Bodhi P. Vani,†Akashnathan Aranganathan,‡and Pratyush Tiwary∗,¶,§
†Institute for Physical Science and Technology, University of Maryland, College Park,
Maryland 20742, USA
‡Biophysics Program and Institute for Physical Science and Technology, University of
Maryland, College Park 20742, USA
¶Department of Chemistry and Biochemistry and Institute for Physical Science and
Technology, University of Maryland, College Park 20742, USA
§Corresponding author
E-mail: ptiwary@umd.edu
Abstract
Kinases compose one of the largest fractions of the human proteome, and their
misfunction is implicated in many diseases, in particular cancers. The ubiquitousness
and structural similarities of kinases makes specific and effective drug design difficult.
In particular, conformational variability due to the evolutionarily conserved DFG mo-
tif adopting in and out conformations and the relative stabilities thereof are key in
structure-based drug design for ATP competitive drugs. These relative conformational
stabilities are extremely sensitive to small changes in sequence, and provide an impor-
tant problem for sampling method development. Since the invention of AlphaFold2, the
world of structure-based drug design has noticeably changed. In spite of it being limited
to crystal-like structure prediction, several methods have also leveraged its underlying
1arXiv:2309.03649v1 [physics.bio-ph] 7 Sep 2023 |
NIPS-2007-active-preference-learning-with-discrete-choice-data-Paper.pdf | Active Preference Learning with Discrete Choice Data
Eric Brochu, Nando de Freitas and Abhijeet Ghosh
Department of Computer Science
University of British Columbia
Vancouver, BC, Canada
{ebrochu, nando, ghosh}@cs.ubc.ca
Abstract
We propose an active learning algorithm that learns a continuous valuation model
from discrete preferences. The algorithm automatically decides what items are
best presented to an individual in order to find the item that they value highly in
as few trials as possible, and exploits quirks of human psychology to minimize
time and cognitive burden. To do this, our algorithm maximizes the expected
improvement at each query without accurately modelling the entire valuation sur-
face, which would be needlessly expensive. The problem is particularly difficult
because the space of choices is infinite. We demonstrate the effectiveness of the
new algorithm compared to related active learning methods. We also embed the
algorithm within a decision making tool for assisting digital artists in rendering
materials. The tool finds the best parameters while minimizing the number of
queries.
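For concreteness, the sketch below shows the expected-improvement acquisition step given a Gaussian-process posterior over item valuations; learning that posterior from discrete preferences is the harder part of the method and is not reproduced here, and the toy posterior values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu: np.ndarray, sigma: np.ndarray, best_f: float,
                         xi: float = 0.0) -> np.ndarray:
    """Expected improvement over the current best valuation.

    mu and sigma are the posterior mean and standard deviation of the latent
    valuation at candidate items; the next query is the argmax of EI.
    """
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best_f - xi) / sigma
    return (mu - best_f - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Pick the candidate item to show the user next (toy posterior).
mu, sigma = np.array([0.1, 0.4, 0.3]), np.array([0.2, 0.05, 0.3])
next_item = int(np.argmax(expected_improvement(mu, sigma, best_f=0.35)))
print(next_item)
```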
1 Introduction
A computer graphics artist sits down to use a simple renderer to find appropriate surfaces for a
typical reflectance model. It has a series of parameters that must be set to control the simulation:
“specularity”, “Fresnel reflectance coefficient”, and other, less-comprehensible ones. The parame-
ters interact in ways difficult to discern. The artist knows in his mind’s eye what he wants, but he’s
not a mathematician or a physicist — no course he took during his MFA covered Fresnel reflectance
models. Even if it had, would it help? He moves the specularity slider and waits for the image
to be generated. The surface is too shiny. He moves the slider back a bit and runs the simulation
again. Better. The surface is now appropriately dull, but too dark. He moves a slider down. Now
it’s the right colour, but the specularity doesn’t look quite right any more. He repeatedly bumps the
specularity back up, rerunning the renderer at each attempt until it looks right. Good. Now, how to
make it look metallic...?
Problems in simulation, animation, rendering and other areas often take such a form, where the
desired end result is identifiable by the user, but parameters must be tuned in a tedious trial-and-
error process. This is particularly apparent in psychoperceptual models, where continual tuning is
required to make something “look right”. Using the animation of character walking motion as an
example, for decades, animators and scientists have tried to develop objective functions based on
kinematics, dynamics and motion capture data [Cooper et al., 2007 ]. However, even when expen-
sive mocap is available, we simply have to watch an animated film to be convinced of how far we
still are from solving the gait animation problem. Unfortunately, it is not at all easy to find a mapping
from parameterized animation to psychoperceptual plausibility. The perceptual objective function is
simply unknown. Fortunately, however, it is fairly easy to judge the quality of a walk — in fact, it is
trivial and almost instantaneous. The application of this principle to animation and other psychoper-
ceptual tools is motivated by the observation that humans often seem to be forming a mental model
of the objective function. This model enables them to exploit feasible regions of the parameter space
where the valuation is predicted to be high and to explore regions of high uncertainty. It is our the-
1 |
2206.14858.pdf | Solving Quantitative Reasoning Problems with
Language Models
Aitor Lewkowycz∗, Anders Andreassen†, David Dohan†, Ethan Dyer†, Henryk Michalewski†,
Vinay Ramasesh†, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur∗, Guy Gur-Ari∗, and Vedant Misra∗
Google Research
Abstract
Language models have achieved remarkable performance on a wide range of tasks that require natural
language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that
require quantitative reasoning, such as solving mathematics, science, and engineering problems at the
college level. To help close this gap, we introduce Minerva, a large language model pretrained on general
natural language data and further trained on technical content. The model achieves state-of-the-art
performance on technical benchmarks without the use of external tools. We also evaluate our model
on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other
sciences that require quantitative reasoning, and find that the model can correctly answer nearly a third
of them.
1 Introduction
Artificial neural networks have seen remarkable success in a variety of domains including computer vision,
speech recognition, audio and image generation, translation, game playing, and robotics. In particular, large
language models have achieved excellent performance across a variety of natural language tasks including
common-sense reasoning, question answering, and summarization (Raffel et al., 2019; Brown et al., 2020;
Rae et al., 2021; Smith et al., 2022; Chowdhery et al., 2022). However, these models have struggled with
tasks that require quantitative reasoning, such as solving mathematics, science, and engineering problems
(Hendrycks et al., 2021; Cobbe et al., 2021).
Quantitative reasoning problems are an interesting domain of application for language models because they
test the capability of models on several fronts. They require the solver to correctly parse a natural language
input, potentially recall world knowledge that pertains to the problem, and apply an algorithm or series of
computations to the information provided in order to arrive at a correct solution. They also require that the
solver is able to correctly parse and generate precise sequences of mathematical tokens, as well as apply a
computational procedure to tokens via symbolic and numerical manipulation. Finally, such problems are a
proving ground for research toward robust quantitative reasoning solvers that are useful in supporting the
work of humans in scientific and technical fields.
Previous research has shown that large language models achieve impressive performance on math and
programming questions after training on domain specific datasets (Chen et al., 2021; Austin et al., 2021;
∗Equal leadership and advising contribution
†Equal contribution
1arXiv:2206.14858v2 [cs.CL] 1 Jul 2022 |
1909.12264.pdf | Quantum Graph Neural Networks
Guillaume Verdon
X, The Moonshot Factory
Mountain View, CA
gverdon@x.team
Trevor McCourt
Google Research
Venice, CA
trevormccrt@google.com
Enxhell Luzhnica, Vikash Singh,
Stefan Leichenauer, Jack Hidary
X, The Moonshot Factory
Mountain View, CA
{enxhell,singvikash,
sleichenauer,hidary}@x.team
Abstract
We introduce Quantum Graph Neural Networks ( QGNN ), a new class of quantum
neural network ansatze which are tailored to represent quantum processes which
have a graph structure, and are particularly suitable to be executed on distributed
quantum systems over a quantum network. Along with this general class of ansatze,
we introduce further specialized architectures, namely, Quantum Graph Recurrent
Neural Networks ( QGRNN ) and Quantum Graph Convolutional Neural Networks
(QGCNN ). We provide four example applications of QGNN s: learning Hamiltonian
dynamics of quantum systems, learning how to create multipartite entanglement in
a quantum network, unsupervised learning for spectral clustering, and supervised
learning for graph isomorphism classification.
1 Introduction
Variational Quantum Algorithms are a promising class of algorithms that are rapidly emerging
as a central subfield of Quantum Computing [ 1,2,3]. Similar to parameterized transformations
encountered in deep learning, these parameterized quantum circuits are often referred to as Quantum
Neural Networks (QNNs). Recently, it was shown that QNNs that have no prior on their structure
suffer from a quantum version of the no-free lunch theorem [ 4] and are exponentially difficult to
train via gradient descent. Thus, there is a need for better QNN ansatze. One popular class of
QNNs has been Trotter-based ansatze [ 2,5]. The optimization of these ansatze has been extensively
studied in recent works, and efficient optimization methods have been found [ 6,7]. On the classical
side, graph-based neural networks leveraging data geometry have seen some recent successes in
deep learning, finding applications in biophysics and chemistry [ 8]. Inspired from this success, we
propose a new class of Quantum Neural Network ansatz which allows for both quantum inference
and classical probabilistic inference for data with a graph-geometric structure. In the sections below,
we introduce the general framework of the QGNN ansatz as well as several more specialized variants
and showcase four potential applications via numerical implementation.
Preprint. Under review.arXiv:1909.12264v1 [quant-ph] 26 Sep 2019 |
2403.08763.pdf | Simple and Scalable Strategies to Continually Pre-train
Large Language Models
Adam Ibrahim∗†⊚ ibrahima@mila.quebec
Benjamin Thérien∗†⊚ benjamin.therien@mila.quebec
Kshitij Gupta∗†⊚ kshitij.gupta@mila.quebec
Mats L. Richter†⊚ mats.richter@mila.quebec
Quentin Anthony♢†⊚ qubitquentin@gmail.com
Timothée Lesort†⊚ t.lesort@gmail.com
Eugene Belilovsky‡⊚ eugene.belilovsky@concordia.ca
Irina Rish†⊚ irina.rish@umontreal.ca
Department of Computer Science and Operation Research,
Université de Montréal, Montréal, Canada †
Department of Computer Science and Software Engineering,
Concordia University, Montréal, Canada ‡
Mila, Montréal, Canada ⊚
EleutherAI ♢
Abstract
Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start
the process over again once new data becomes available. A much more efficient solution is
to continually pre-train these models – saving significant compute compared to re-training.
However, the distribution shift induced by new data typically results in degraded performance
on previous data or poor adaptation to the new data. In this work, we show that a simple and
scalable combination of learning rate (LR) re-warming, LR re-decaying, and replay of previous
data is sufficient to match the performance of fully re-training from scratch on all available
data, as measured by final loss and language model (LM) evaluation benchmarks. Specifically,
we show this for a weak but realistic distribution shift between two commonly used LLM
pre-training datasets (English →English) and a stronger distribution shift (English →German)
at the 405M parameter model scale with large dataset sizes (hundreds of billions of tokens).
Selecting the weak but realistic shift for larger-scale experiments, we also find that our
continual learning strategies match the re-training baseline for a 10B parameter LLM. Our
results demonstrate that LLMs can be successfully updated via simple and scalable continual
learning strategies, matching the re-training baseline using only a fraction of the compute.
Finally, inspired by previous work, we propose alternatives to the cosine learning rate schedule
that help circumvent forgetting induced by LR re-warming and that are not bound to a fixed
token budget.
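A minimal sketch of the learning-rate component of this recipe is given below; the maximum and minimum learning rates and the warmup length are illustrative values, and replay of previous data is assumed to be handled separately in the data loader.

```python
import math

def rewarm_cosine_lr(step: int, warmup_steps: int, total_steps: int,
                     max_lr: float = 3e-4, min_lr: float = 3e-5) -> float:
    """LR re-warming followed by cosine re-decay for a new pre-training stage.

    `step` is counted from the start of continual pre-training on the new dataset.
    """
    if step < warmup_steps:                      # linear re-warming from min_lr
        return min_lr + (max_lr - min_lr) * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Example: query the schedule at a few points of a 100k-step continual run.
print([round(rewarm_cosine_lr(s, 1_000, 100_000), 6) for s in (0, 500, 1_000, 50_000, 100_000)])
```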
1 Introduction
Over the past few years, large pre-trained models have enabled massive performance improvements in
language modeling (Brown et al., 2020; Zhao et al., 2023), visual understanding (Radford et al., 2021; Alayrac
et al., 2022; Kirillov et al., 2023), text-to-image generation (Rombach et al., 2022; Pernias et al., 2024), and
text-to-video generation (Brooks et al., 2024)—to name a few. Large language models (LLMs) are at the
center of all these improvements, providing an intuitive means for humans to interface with machine learning
algorithms through language.
∗Equal contribution; authorship order within equal contributors was randomized.
1arXiv:2403.08763v1 [cs.LG] 13 Mar 2024 |
2310.02226.pdf | Think before you speak:
Training Language Models With Pause Tokens
Sachin Goyal∗
Machine Learning Department
Carnegie Mellon University
sachingo@andrew.cmu.edu
Ziwei Ji
Google Research, NY
ziweiji@google.com
Ankit Singh Rawat
Google Research, NY
ankitsrawat@google.com
Aditya Krishna Menon
Google Research, NY
adityakmenon@google.com
Sanjiv Kumar
Google Research, NY
sanjivk@google.com
Vaishnavh Nagarajan
Google Research, NY
vaishnavh@google.com
Abstract
Language models generate responses by producing a series of tokens in immediate
succession: the (K+ 1)thtoken is an outcome of manipulating Khidden vectors
per layer, one vector per preceding token. What if instead we were to let the model
manipulate say, K+10 hidden vectors, before it outputs the (K+1)thtoken? We
operationalize this idea by performing training and inference on language mod-
els with a (learnable) pause token, a sequence of which is appended to the input
prefix. We then delay extracting the model’s outputs until the last pause token is
seen, thereby allowing the model to process extra computation before committing
to an answer. We empirically evaluate pause-training on decoder-only models
of 1B and 130M parameters with causal pretraining on C4, and on downstream
tasks covering reasoning, question-answering, general understanding and fact re-
call. Our main finding is that inference-time delays show gains on our tasks when
the model is both pre-trained and finetuned with delays. For the 1B model, we
witness gains on eight tasks, most prominently, a gain of 18% EM score on the
QA task of SQuAD, 8%on CommonSenseQA and 1%accuracy on the reason-
ing task of GSM8k. Our work raises a range of conceptual and practical future
research questions on making delayed next-token prediction a widely applicable
new paradigm.
1 Introduction
Transformer-based causal language models generate tokens one after the other in immediate succes-
sion. To generate the (K+ 1)thtoken, the model consumes the Kprevious tokens, and proceeds
layer by layer, computing Kintermediate vectors in each hidden layer. Each vector in itself is the
output of a module (consisting of self-attention and multi-layer-perceptrons) operating on the pre-
vious layer’s output vectors. However sophisticated this end-to-end process may be, it abides by a
peculiar constraint: the number of operations determining the next token is limited by the number
of tokens seen so far. Arguably, this was the most natural design choice when the Transformer was
first conceived by Vaswani et al. (2017). But in hindsight, one may wonder whether for some inputs,
the(K+ 1)thtoken demands K+MTransformer operations in each layer (for M > 0), which
cannot be met by the arbitrarily constrained Koperations per layer. This paper explores one way to
free the Transformer of this arbitrary per-layer computational constraint.
The approach we study is to append dummy tokens into a decoder-only model’s input, thereby de-
laying the model’s output. Specifically, we select a (learnable) pause token (denoted <pause> ) and
append one or more copies of <pause> as a sequence to the input. We simply ignore the model’s cor-
responding outputs until the last <pause> token is seen, after which we begin extracting its response.
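A minimal sketch of the input-side change is shown below; the pause token id and the number of appended pauses are illustrative assumptions.

```python
import torch

def append_pause_tokens(input_ids: torch.Tensor, pause_id: int,
                        num_pauses: int = 10) -> torch.Tensor:
    """Append a run of <pause> tokens to each input prefix (illustrative sketch).

    At inference, the model's outputs at these positions are ignored; answer
    extraction begins only after the last <pause> token.
    """
    batch = input_ids.size(0)
    pauses = torch.full((batch, num_pauses), pause_id,
                        dtype=input_ids.dtype, device=input_ids.device)
    return torch.cat([input_ids, pauses], dim=1)

prefix = torch.tensor([[101, 2054, 2003, 1016, 1009, 1016, 102]])  # toy token ids
with_pauses = append_pause_tokens(prefix, pause_id=32000, num_pauses=10)  # pause_id is hypothetical
```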
∗Work done in part as a Student Researcher at Google.
1arXiv:2310.02226v1 [cs.CL] 3 Oct 2023 |
2212.00178.pdf | Open Relation and Event Type Discovery with Type Abstraction
Sha Li, Heng Ji, Jiawei Han
University of Illinois Urbana-Champaign
{shal2, hengji, hanj}@illinois.edu
Abstract
Conventional “closed-world" information ex-
traction (IE) approaches rely on human ontolo-
gies to define the scope for extraction. As
a result, such approaches fall short when ap-
plied to new domains. This calls for systems
that can automatically infer new types from
given corpora, a task which we refer to as type
discovery . To tackle this problem, we intro-
duce the idea of type abstraction, where the
model is prompted to generalize and name the
type. Then we use the similarity between in-
ferred names to induce clusters. Observing
that this abstraction-based representation is of-
ten complementary to the entity/trigger token
representation, we set up these two represen-
tations as two views and design our model as
a co-training framework. Our experiments on
multiple relation extraction and event extrac-
tion datasets consistently show the advantage
of our type abstraction approach.
1 Introduction
Information extraction has enjoyed widespread suc-
cess, however, the majority of information extrac-
tion methods are “reactive”, relying on end-users
to specify their information needs in prior and pro-
vide supervision accordingly. This leads to “closed-
world” systems (Lin et al., 2020; Du and Cardie,
2020; Li et al., 2021; Zhong and Chen, 2021; Ye
et al., 2022) that are confined to a set of pre-defined
types. It is desirable to make systems act more
“proactively” like humans who are always on the
lookout for interesting new information, generalize
them into new types, and find more instances of
such types, even if they are not seen previously.
One related attempt is the Open Information Ex-
traction paradigm (Banko et al., 2008), which aims
at extracting all (subject, predicate, object) triples
from text that denote some kind of relation. While
OpenIE does not rely on pre-specified relations,
its exhaustive and free-form nature often leads to
noisy and redundant extractions.
Figure 1: An example instance shown in its two views. Token view: "<h>John</h> earned a bachelor’s degree from the <t>University of Wollongong</t>." Mask view: "University of Wollongong is the [MASK] of John." (Relation: School_Attended). For each instance, the token view is computed from the pre-trained LM embedding of the first token in the entity/trigger. The mask view is computed from the [MASK] token embedding in the type prompt.
To bridge the gap between closed-world IE and
OpenIE, a vital step is for systems to possess the
ability of automatically inducing new types and
extracting instances of such new types. Under vari-
ous contexts, related methods have been proposed
under the name of “relation discovery” (Yao et al.,
2011; Marcheggiani and Titov, 2016),“open rela-
tion extraction” (Wu et al., 2019; Hu et al., 2020)
and “event type induction” (Huang and Ji, 2020;
Shen et al., 2021). In this paper, we unify such
terms and refer to the task as type discovery .
Type discovery can naturally be posed as a clus-
tering task. This heavily relies on defining an appro-
priate metric space where types are easily separable.
The token embedding space from pre-trained lan-
guage models is a popular choice, but as observed
by (Zhao et al., 2021), the original metric space
derived from BERT (Devlin et al., 2019) is often
prone to reflect surface form similarity rather than
the desired relation/event-centered similarity. One
way to alleviate this issue is to use known types
to help learn a similarity metric that can also be
applied to unknown types (Wu et al., 2019; Zhao
et al., 2021; Huang and Ji, 2020).
In this paper we introduce another idea of ab-
straction : a discovered type should have an ap-
propriate and concise type name. The human vo-
cabulary serves as a good repository of concepts
that appear meaningful to people. When we assign
a name to a cluster, we implicitly define the com-arXiv:2212.00178v1 [cs.CL] 30 Nov 2022 |
10.1016.j.cell.2023.12.037.pdf | Article
Xist ribonucleoproteins promote female sex-biased
autoimmunity
Graphical abstract
Highlights
Highlights
- Transgenic mouse models inducibly express Xist in male animals
- Xist expression in males induces autoantibodies and autoimmune pathology
- Xist in males reprograms T and B cell populations to female-like patterns
- Autoantibodies to Xist RNP characterize female-biased autoimmune diseases in patients
Authors
Diana R. Dou, Yanding Zhao, Julia A. Belk, ..., Anton Wutz, Paul J. Utz, Howard Y. Chang
Correspondence
howchang@stanford.edu
In brief
The Xist RNA protein complex, present only in females, is immunogenic and may underlie female-biased autoimmunity.
Dou et al., 2024, Cell 187, 733–749
February 1, 2024 © 2024 The Authors. Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.037
|
2012.02296v2.pdf | Generative Capacity of Probabilistic Protein
Sequence Models
Francisco McGee1,2,4, Quentin Novinger2,5, Ronald M Levy1,3,4,6, Vincenzo Carnevale2,3,*,
and Allan Haldane1,6,*
1Center for Biophysics and Computational Biology, Temple University, Philadelphia, 19122, USA
2Institute for Computational Molecular Science, Temple University, Philadelphia, 19122, USA
3Department of Biology, Temple University, Philadelphia, 19122, USA
4Department of Chemistry, Temple University, Philadelphia, 19122, USA
5Department of Computer & Information Sciences, Temple University, Philadelphia, 19122, USA
6Department of Physics, Temple University, Philadelphia, 19122, USA
*Corresponding authors: vincenzo.carnevale@temple.edu, allan.haldane@temple.edu
ABSTRACT
Potts models and variational autoencoders (VAEs) have recently gained popularity as generative protein sequence models
(GPSMs) to explore fitness landscapes and predict the effect of mutations. Despite encouraging results, quantitative characteri-
zation and comparison of GPSM-generated probability distributions is still lacking. It is currently unclear whether GPSMs can
faithfully reproduce the complex multi-residue mutation patterns observed in natural sequences arising due to epistasis. We
develop a set of sequence statistics to assess the “generative capacity” of three GPSMs of recent interest: the pairwise Potts
Hamiltonian, the VAE, and the site-independent model, using natural and synthetic datasets. We show that the generative
capacity of the Potts Hamiltonian model is the largest; the higher order mutational statistics generated by the model agree with
those observed for natural sequences. In contrast, we show that the VAE’s generative capacity lies between the pairwise Potts
and site-independent models. Importantly, our work measures GPSM generative capacity in terms of higher-order sequence
covariation statistics which we have developed, and provides a new framework for evaluating and interpreting GPSM accuracy
that emphasizes the role of epistasis.
Introduction
Recent progress in decoding the patterns of mutations in protein multiple sequence alignments (MSAs) has
highlighted the importance of mutational covariation in determining protein function, conformation and evolution,
and has found practical applications in protein design, drug design, drug resistance prediction, and classification1–3.
These developments were sparked by the recognition that the pairwise covariation of mutations observed in large
MSAs of evolutionarily diverged sequences belonging to a common protein family can be used to fit maximum
entropy “Potts” statistical models4–6. These contain pairwise statistical interaction parameters reflecting epistasis7
between pairs of positions. Such models have been shown to accurately predict physical contacts in protein
structure6,8–10, and have been used to significantly improve the prediction of the fitness effect of mutations to a
sequence compared to site-independent sequence variation models which do not account for covariation11,12.
They are “generative” in the sense that they define the probability, p(S), that a protein sequence Sresults from the
evolutionary process. Intriguingly, the probability distribution p(S)can be used to sample unobserved, and yet viable,
artificial sequences. In practice, the model distribution p(S)depends on parameters that are found by maximizing a
suitably defined likelihood function on observations provided by the MSA of a target protein family. As long as the
model is well specified and generalizes from the training MSA, it can then be used to generate new sequences, and
thus a new MSA whose statistics should match those of the original target protein family. We refer to probabilistic
models that create new protein sequences in this way as generative protein sequence models (GPSMs).
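As a reference point for the discussion that follows, the sketch below evaluates the energy of a sequence under a pairwise Potts model with fields h and couplings J, with p(S) proportional to exp(-E(S)); the sign convention and the random toy parameters are illustrative, not fitted values.

```python
import numpy as np

def potts_energy(seq: np.ndarray, h: np.ndarray, J: np.ndarray) -> float:
    """Energy of a sequence under a pairwise Potts model.

    seq[i] in {0, ..., q-1} is the residue at position i; h has shape (L, q) and
    J has shape (L, L, q, q). The model defines p(S) proportional to exp(-E(S)).
    """
    L = len(seq)
    field = sum(h[i, seq[i]] for i in range(L))
    coupling = sum(J[i, j, seq[i], seq[j]] for i in range(L) for j in range(i + 1, L))
    return -(field + coupling)

L, q = 5, 21                                    # toy length and amino-acid alphabet size
h = np.random.randn(L, q) * 0.1
J = np.random.randn(L, L, q, q) * 0.01
print(potts_energy(np.array([0, 3, 7, 1, 20]), h, J))
```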
The fact that Potts maximum entropy models are limited to pairwise epistatic interaction terms and have a simple
functional form for p(S)raises the possibility that their functional form is not flexible enough to describe the data,
i.e. that the model is not well specified. While a model with only pairwise interaction terms can predict complex
patterns of covariation involving three or more positions through chains of pairwise interactions, it cannot model
certain triplet and higher patterns of covariation that require a model with more than pairwise interaction terms13.
For example, a Potts model cannot predict patterns described by an XOR or boolean parity function in which the
1arXiv:2012.02296v2 [cs.LG] 15 Mar 2021 |
2401.00368.pdf | Improving Text Embeddings with
Large Language Models
Liang Wang∗, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei
Microsoft Corporation
https://aka.ms/GeneralAI
Abstract
In this paper, we introduce a novel and simple method for obtaining high-quality
text embeddings using only synthetic data and less than 1k training steps. Unlike
existing methods that often depend on multi-stage intermediate pre-training with
billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled
datasets, our method does not require building complex training pipelines or relying
on manually collected datasets that are often constrained by task diversity and
language coverage. We leverage proprietary LLMs to generate diverse synthetic
data for hundreds of thousands of text embedding tasks across nearly 100 languages.
We then fine-tune open-source decoder-only LLMs on the synthetic data using
standard contrastive loss. Experiments demonstrate that our method achieves
strong performance on highly competitive text embedding benchmarks without
using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic
and labeled data, our model sets new state-of-the-art results on the BEIR and
MTEB benchmarks.
1 Introduction
Text embeddings are vector representations of natural language that encode its semantic information.
They are widely used in various natural language processing (NLP) tasks, such as information
retrieval (IR), question answering, semantic textual similarity, bitext mining, item recommendation,
etc. In the field of IR, the first-stage retrieval often relies on text embeddings to efficiently recall
a small set of candidate documents from a large-scale corpus using approximate nearest neighbor
search techniques. Embedding-based retrieval is also a crucial component of retrieval-augmented
generation (RAG) [21], which is an emerging paradigm that enables large language models (LLMs)
to access dynamic external knowledge without modifying the model parameters. Source attribution
of generated text is another important application of text embeddings [14] that can improve the
interpretability and trustworthiness of LLMs.
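As a minimal sketch of the first-stage retrieval described above, the snippet below ranks documents by cosine similarity between a query embedding and precomputed document embeddings. The random vectors stand in for real model outputs, and in practice the exact scan would be replaced by an approximate nearest neighbor index.

```python
import numpy as np

# First-stage retrieval: rank documents by cosine similarity to the query
# embedding. The embeddings here are random placeholders, not model outputs.
rng = np.random.default_rng(0)
doc_emb = rng.normal(size=(10_000, 768))
doc_emb /= np.linalg.norm(doc_emb, axis=1, keepdims=True)     # L2-normalize once

def retrieve(query_emb: np.ndarray, k: int = 5) -> np.ndarray:
    q = query_emb / np.linalg.norm(query_emb)
    scores = doc_emb @ q                  # cosine similarity after normalization
    return np.argsort(-scores)[:k]        # indices of the top-k candidate documents

print(retrieve(rng.normal(size=768)))
```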
Previous studies have demonstrated that a weighted average of pre-trained word embeddings [35, 1]
is a strong baseline for measuring semantic similarity. However, these methods fail to capture the
rich contextual information of natural language. With the advent of pre-trained language models
[11], Sentence-BERT [37] and SimCSE [13] have been proposed to learn text embeddings by fine-
tuning BERT on natural language inference (NLI) datasets. To further enhance the performance and
robustness of text embeddings, state-of-the-art methods like E5 [46] and BGE [48] employ a more
complex multi-stage training paradigm that first pre-trains on billions of weakly-supervised text pairs,
and then fine-tunes on several labeled datasets.
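Both the fine-tuning stages above and the method introduced in this paper optimize a contrastive objective. The sketch below shows a standard in-batch-negatives (InfoNCE) loss over already-computed query and passage embeddings; the temperature value and tensor shapes are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.05):
    """In-batch-negative contrastive loss: the i-th query should score highest
    against its own passage; all other passages in the batch act as negatives."""
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.T / temperature                      # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
loss = info_nce_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```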
Existing multi-stage approaches suffer from several drawbacks. Firstly, they entail a complex
multi-stage training pipeline that demands substantial engineering efforts to curate large amounts
∗Correspondence to {wangliang,nanya,fuwei}@microsoft.com
Technical Report. arXiv:2401.00368v2 [cs.CL] 19 Jan 2024 |
More-Is-Different-Anderson.pdf | The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists I think it is accepted without question. The workings of our minds and bodies, and of all the animate or inanimate matter of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws which, except under certain extreme conditions, we feel we know pretty well.
It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others. This point of view, which it is the main purpose of this article to oppose, is expressed in a rather well-known passage by Weisskopf (1):
Looking at the development of science in the Twentieth Century one can distinguish two trends, which I will call "intensive" and "extensive" research, lacking a better terminology. In short: intensive research goes for the fundamental laws, extensive research goes for the explanation of phenomena in terms of known fundamental laws. As always, distinctions of this kind are not unambiguous, but they are clear in most cases. Solid state physics, plasma physics, and perhaps also biology are extensive. High energy physics and a good part of nuclear physics are intensive. There is always much less intensive research going on than extensive. Once new fundamental laws are discovered, a large and ever increasing activity begins in order to apply the discoveries to hitherto unexplained phenomena. Thus, there are two dimensions to basic research. The frontier of science extends all along a long line from the newest and most modern intensive research, over the extensive research recently spawned by the intensive research of yesterday, to the broad and well developed web of extensive research activities based on intensive research of past decades.
The author is a member of the technical staff of the Bell Telephone Laboratories, Murray Hill, New Jersey 07974, and visiting professor of theoretical physics at Cavendish Laboratory, Cambridge, England. This article is an expanded version of a Regents' Lecture given in 1967 at the University of California, La Jolla.
The effectiveness of this message may be indicated by the fact that I heard it quoted recently by a leader in the field of materials science, who urged the participants at a meeting dedicated to "fundamental problems in condensed matter physics" to accept that there were few or no such problems and that nothing was left but extensive science, which he seemed to equate with device engineering.
The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a "constructionist" one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.
The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. That is, it seems to me that one may array the sciences roughly linearly in a hierarchy, according to the idea: The elementary entities of science X obey the laws of science Y.
X                                   Y
solid state or many-body physics    elementary particle physics
chemistry                           many-body physics
molecular biology                   chemistry
cell biology                        molecular biology
...                                 ...
psychology                          physiology
social sciences                     psychology
But this hierarchy does not imply that science X is "just applied Y." At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.
In my own field of many-body physics, we are, perhaps, closer to our fundamental, intensive underpinnings than in any other science in which non-trivial complexities occur, and as a result we have begun to formulate a general theory of just how this shift from quantitative to qualitative differentiation takes place. This formulation, called the theory of "broken symmetry," may be of help in making more generally clear the breakdown of the constructionist converse of reductionism. I will give an elementary and incomplete explanation of these ideas, and then go on to some more general speculative comments about analogies at
More Is Different
Broken symmetry and the nature of the hierarchical structure of science
P. W. Anderson |
2002.11557v1.pdf | Query-Efficient Correlation Clustering
David García–Soriano
d.garcia.soriano@isi.it
ISI Foundation
Turin, Italy
Konstantin Kutzkov
kutzkov@gmail.com
Amalfi Analytics
Barcelona, Spain
Francesco Bonchi
francesco.bonchi@isi.it
ISI Foundation, Turin, Italy
Eurecat, Barcelona, Spain
Charalampos Tsourakakis
ctsourak@bu.edu
Boston University
USA
ABSTRACT
Correlation clustering is arguably the most natural formulation of
clustering. Given n objects and a pairwise similarity measure, the
goal is to cluster the objects so that, to the best possible extent,
similar objects are put in the same cluster and dissimilar objects
are put in different clusters.
A main drawback of correlation clustering is that it requires
as input the Θ(n²) pairwise similarities. This is often infeasible
to compute or even just to store. In this paper we study query-
efficient algorithms for correlation clustering. Specifically, we devise
a correlation clustering algorithm that, given a budget of Qqueries,
attains a solution whose expected number of disagreements is at
most 3·OPT+O(n3
Q), where OPT is the optimal cost for the instance.
Its running time is O(Q), and can be easily made non-adaptive
(meaning it can specify all its queries at the outset and make them
in parallel) with the same guarantees. Up to constant factors, our
algorithm yields a provably optimal trade-off between the number
of queries Qand the worst-case error attained, even for adaptive
algorithms.
Finally, we perform an experimental study of our proposed
method on both synthetic and real data, showing the scalability
and the accuracy of our algorithm.
CCS CONCEPTS
• Theory of computation → Graph algorithms analysis; Facility location and clustering; Active learning;
KEYWORDS
correlation clustering, active learning, query complexity, algorithm
design
ACM Reference Format:
David García–Soriano, Konstantin Kutzkov, Francesco Bonchi, and Char-
alampos Tsourakakis. 2020. Query-Efficient Correlation Clustering. In Pro-
ceedings of The Web Conference 2020 (WWW ’20), April 20–24, 2020, Taipei,
Taiwan. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3366423.
3380220
This paper is published under the Creative Commons Attribution 4.0 International
(CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their
personal and corporate Web sites with the appropriate attribution.
WWW ’20, April 20–24, 2020, Taipei, Taiwan
©2020 IW3C2 (International World Wide Web Conference Committee), published
under Creative Commons CC-BY 4.0 License.
ACM ISBN 978-1-4503-7023-3/20/04.
https://doi.org/10.1145/3366423.3380220
1 INTRODUCTION
Correlation clustering [3] (or cluster editing ) is a prominent cluster-
ing framework where we are given a set V = [n] and a symmetric
pairwise similarity function sim: $\binom{V}{2}$ → {0,1}, where $\binom{V}{2}$ is the
set of unordered pairs of elements of V. The goal is to cluster the
items in such a way that, to the best possible extent, similar ob-
jects are put in the same cluster and dissimilar objects are put in
different clusters. Assuming that cluster identifiers are represented
by natural numbers, a clustering ℓ is a function ℓ: V → N, and
each cluster is a maximal set of vertices sharing the same label.
Correlation clustering aims at minimizing the following cost:
$$\mathrm{cost}(\ell) = \sum_{\substack{(x,y)\in\binom{V}{2}\\ \ell(x)=\ell(y)}} \bigl(1-\mathrm{sim}(x,y)\bigr) \;+\; \sum_{\substack{(x,y)\in\binom{V}{2}\\ \ell(x)\neq\ell(y)}} \mathrm{sim}(x,y). \tag{1}$$
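Equation (1) can be evaluated directly by scanning all unordered pairs; the short sketch below does so for a toy ±-labeled instance. The data and function names are made up for illustration and are not part of the paper.

```python
from itertools import combinations

def cc_cost(labels, sim):
    """Number of disagreements (Eq. 1): '+' pairs split across clusters plus
    '-' pairs placed in the same cluster. sim[(x, y)] is 1 for '+' and 0 for '-',
    keyed by unordered pairs with x < y."""
    cost = 0
    for x, y in combinations(sorted(labels), 2):
        same_cluster = labels[x] == labels[y]
        s = sim[(x, y)]
        cost += (1 - s) if same_cluster else s
    return cost

# Toy instance: vertices 0..3, '+' edges form the clique {0,1,2}; vertex 3 is '-' to all.
sim = {(x, y): 1 if {x, y} <= {0, 1, 2} else 0 for x, y in combinations(range(4), 2)}
labels = {0: 0, 1: 0, 2: 0, 3: 1}
print(cc_cost(labels, sim))   # 0 disagreements for this cluster-graph instance
```

For this cluster-graph instance the cost is 0, matching the observation below that cluster graphs admit a perfect clustering.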
The intuition underlying the above problem definition is that
if two objects x and y are dissimilar and are assigned to the same
cluster we should pay a cost of 1, i.e., the amount of their dissimi-
larity. Similarly, if x, y are similar and they are assigned to different
clusters we should also pay a cost of 1, i.e., the amount of their similarity
sim(x,y). The correlation clustering framework naturally extends
to a non-binary, symmetric function, i.e., sim: $\binom{V}{2}$ → [0,1]. In this
paper we focus on the binary case; the general non-binary case
can be efficiently reduced to this case at a loss of only a constant
factor in the approximation [3, Thm. 23]. The binary setting can
be viewed very conveniently through graph-theoretic lenses: the n
items correspond to the vertices of a similarity graph G, which is a
complete undirected graph with edges labeled “+” or “-”. An edge e
causes a disagreement (of cost 1) between the similarity graph and
a clustering when it is a “+” edge connecting vertices in different
clusters, or a “–” edge connecting vertices within the same cluster. If
we were given a cluster graph [22], i.e., a graph whose set of positive
edges is the union of vertex-disjoint cliques, we would be able to
produce a perfect (i.e., cost 0) clustering simply by computing the
connected components of the positive graph. However, similarities
will generally be inconsistent with one another, so incurring a cer-
tain cost is unavoidable. Correlation clustering aims at minimizing
such cost. The problem may be viewed as the task of finding the
equivalence relation that most closely resembles a given symmetric
relation. The correlation clustering problem is NP-hard [3, 22]. arXiv:2002.11557v1 [cs.DS] 26 Feb 2020 |
10.1093.gbe.evad084.pdf | Unsupervised Deep Learning Can Identify Protein
Functional Groups from Unaligned Sequences
Kyle T. David1,* and Kenneth M. Halanych2
1Department of Biological Sciences, Auburn University, Auburn, Alabama, USA
2Center for Marine Sciences, University of North Carolina Wilmington, Wilmington, North Carolina, USA
*Corresponding author: E-mail: kzd0038@auburn.edu .
Accepted: 13 May 2023
Abstract
Interpreting protein function from sequence data is a fundamental goal of bioinformatics. However, our current understanding of protein diversity is bottlenecked by the fact that most proteins have only been functionally validated in model organisms, limiting our understanding of how function varies with gene sequence diversity. Thus, accuracy of inferences in clades without model representatives is questionable. Unsupervised learning may help to ameliorate this bias by identifying highly complex patterns and structure from large data sets without external labels. Here, we present DeepSeqProt, an unsupervised deep learning program for exploring large protein sequence data sets. DeepSeqProt is a clustering tool capable of distinguishing between broad classes of proteins while learning local and global structure of functional space. DeepSeqProt is capable of
learning salient biological features from unaligned, unannotated sequences. DeepSeqProt is more likely to capture complete
protein families and statistically significant shared ontologies within proteomes than other clustering methods. We hope this
framework will prove of use to researchers and provide a preliminary step in further developing unsupervised deep learning in
molecular biology.
Key words: machine learning, protein annotation, bioinformatics.
Introduction
As sequencing technology continues to improve, there is an
ever-increasing need to adequately annotate and characterize novel protein sequences and their predicted func-
tions. With thousands of new sequences being uploaded
every day, predicting the function of every protein directly with conventional experimental studies such as gene
knockouts or assays is not possible. Thus, attempting to in-
fer protein function automatically is necessary. Many such
methods exist but fundamentally operate the same way:
by matching the sequence of a protein with unknown func-
tion to a reference sequence of a protein with known func-
tion and then assuming that functions are the same. These
Significance
In this manuscript, we report the results of a new unsupervised machine learning software, DeepSeqProt. Unsupervised
methods offer several advantages which can help escape longstanding pitfalls and biases pervading computational mo-
lecular biology. DeepSeqProt learns from and processes unaligned protein sequences with the goal of clustering them
into informative groups with regard to protein family and function, as well as distributing the clusters themselves in a
lower dimension space. We discovered that unsupervised deep learning is capable of recognizing patterns shared
among proteins of similar families and functional affinities, exceeding conventional sequence similarity-based clustering
in some scenarios. DeepSeqProt has broad applications for computational molecular biology and may be especially use-
ful for nonmodel organisms.
© The Author(s) 2023. Published by Oxford University Press on behalf of Society for Molecular Biology and Evolution.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
GBE Genome Biol. Evol. 15(5) https://doi.org/10.1093/gbe/evad084 Advance Access publication 22 May 2023
|
10.1038.s41467-024-46631-y.pdf | Article https://doi.org/10.1038/s41467-024-46631-y
Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
Ariel Goldstein1,2, Avigail Grinstein-Dabush2,8, Mariano Schain2,8, Haocheng Wang3, Zhuoqiao Hong3, Bobbi Aubrey3,4, Samuel A. Nastase3, Zaid Zada3, Eric Ham3, Amir Feder2, Harshvardhan Gazula3, Eliav Buchnik2, Werner Doyle4, Sasha Devore4, Patricia Dugan4, Roi Reichart5, Daniel Friedman4, Michael Brenner2,6, Avinatan Hassidim2, Orrin Devinsky4, Adeen Flinker4,7 & Uri Hasson2,3
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
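The zero-shot mapping described above can be sketched, under our own simplifying assumptions, as a leave-one-out linear encoding model: fit a map from contextual embeddings to brain embeddings on all but one word, predict the held-out word's brain embedding, and check whether the prediction lands closest to the correct word. The synthetic data, the least-squares fit, and the cosine ranking below are illustrative stand-ins for the authors' actual pipeline and non-overlapping word split.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_words, d_model, d_brain = 200, 50, 30
X = rng.normal(size=(n_words, d_model))  # placeholder contextual embeddings, one per word
Y = X @ rng.normal(size=(d_model, d_brain)) + 0.1 * rng.normal(size=(n_words, d_brain))  # synthetic "brain embeddings"

hits = 0
for i in range(n_words):
    train = np.delete(np.arange(n_words), i)
    W, *_ = lstsq(X[train], Y[train], rcond=None)    # linear map fit without word i
    pred = X[i] @ W                                  # predicted brain embedding for the left-out word
    # Rank all words by cosine similarity to the prediction; a hit means word i ranks first.
    sims = (Y @ pred) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(pred))
    hits += int(np.argmax(sims) == i)
print(f"top-1 zero-shot accuracy: {hits / n_words:.2f}")
```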
Deep language models (DLMs) trained on massive corpora of natural text provide a radically different framework for how language is represented in the brain. The recent success of DLMs in modeling natural language can be traced to the gradual development of three foundational ideas in computational linguistics. The first key innovation was to (1) embed words in continuous vector space: Traditionally, words in language were viewed as discrete symbolic units in a lexicon1,2. Early work in distributional semantics demonstrated that the meaning of words could instead be captured by geometric relationships in a continuous vector space based on
Received: 24 July 2022
Accepted: 4 March 2024
1Business School, Data Science department and Cognitive Department, Hebrew University, Jerusalem, Israel.2Google Research, Tel Aviv, Israel.3Department
of Psychology and the Neuroscience Institute, Princeton University, Princeton, NJ, USA.4New York University Grossman School of Medicine, New York, NY,
USA.5Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel.6School of Engineering and Applied Science,
Harvard University, Cambridge, MA, USA.7New York University Tandon School of Engineering, Brooklyn, NY, USA.8These authors contributed equally: Avigail
Grinstein-Dabush, Mariano Schain. e-mail: ariel.y.goldstein@mail.huji.ac.il
Nature Communications | (2024) 15:2768
 |