filename | text
---|---
2203.14263.pdf |
A General Survey on Attention Mechanisms in
Deep Learning
Gianni Brauwers and Flavius Frasincar
Abstract —Attention is an important mechanism that can be employed for a variety of deep learning models across many different
domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The
various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a
comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed,
and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in
the field of attention models is considered.
Index Terms —Attention models, deep learning, introductory and survey, neural nets, supervised learning
1 INTRODUCTION
THE idea of mimicking human attention first arose in the
field of computer vision [1], [2] in an attempt to reduce
the computational complexity of image processing while
improving performance by introducing a model that would
only focus on specific regions of images instead of the entire
picture. However, the true starting point of the attention mechanisms we know today is often attributed to the field of natural language processing [3]. Bahdanau et
al. [3] implement attention in a machine translation model to
address certain issues with the structure of recurrent neural
networks. After Bahdanau et al. [3] emphasized the advan-
tages of attention, the attention techniques were refined [4]
and quickly became popular for a variety of tasks, such as
text classification [5], [6], image captioning [7], [8], sentiment
analysis [6], [9], and speech recognition [10], [11], [12].
Attention has become a popular technique in deep learn-
ing for several reasons. Firstly, models that incorporate
attention mechanisms attain state-of-the-art results for all
of the previously mentioned tasks, and many others. Fur-
thermore, most attention mechanisms can be trained jointly
with a base model, such as a recurrent neural network or a convolutional neural network, using regular backpropagation [3]. Additionally, attention introduces a certain type
of interpretability into neural network models [8], which are generally known to be difficult to interpret. Moreover, the popularity of attention mechanisms was further boosted by the introduction of the Transformer
model [13] that further proved how effective attention can
be. Attention was originally introduced as an extension to
recurrent neural networks [14]. However, the Transformer
model proposed in [13] poses a major development in at-
tention research as it demonstrates that the attention mech-
anism is sufficient to build a state-of-the-art model. This
means that disadvantages, such as the fact that recurrent neural networks are particularly difficult to parallelize, can be circumvented. As was the case for the introduction of the original attention mechanism [3], the Transformer model was created for machine translation, but was quickly adopted for other tasks, such as image processing [15], video processing [16], and recommender systems [17].

G. Brauwers and F. Frasincar are with the Erasmus School of Economics, Erasmus University Rotterdam, 3000 DR, Rotterdam, the Netherlands (e-mail: {frasincar, brauwers}@ese.eur.nl). Manuscript received July 6, 2020; revised June 21, 2021. Corresponding author: F. Frasincar.
The purpose of this survey is to explain the general
form of attention, and provide a comprehensive overview
of attention techniques in deep learning. Other surveys have
already been published on the subject of attention models.
For example, in [18], a survey is presented on attention in
computer vision, [19] provides an overview of attention in
graph models, and [20], [21], [22] are all surveys on attention
in natural language processing. This paper partly builds
on the information presented in the previously mentioned
surveys. Yet, we provide our own significant contributions.
The main difference between this survey and the previously
mentioned ones is that the other surveys generally focus
on attention models within a certain domain. This survey,
however, provides a cross-domain overview of attention
techniques. We discuss the attention techniques in a general
way, allowing them to be understood and applied in a
variety of domains. Furthermore, we found the taxonomies
presented in previous surveys to be lacking the depth and
structure needed to properly distinguish the various atten-
tion mechanisms. Additionally, certain significant attention
techniques have not yet been properly discussed in previ-
ous surveys, while other presented attention mechanisms
seem to be lacking either technical details or intuitive ex-
planations. Therefore, in this paper, we present important
attention techniques by means of a single framework using
a uniform notation, a combination of both technical and in-
tuitive explanations for each presented attention technique,
and a comprehensive taxonomy of attention mechanisms.
The structure of this paper is as follows. Section 2 in-
troduces a general attention model that provides the reader
with a basic understanding of the properties of attention
and how it can be applied. One of the main contributions
of this paper is the taxonomy of attention techniques pre-
sented in Section 3. In this section, attention mechanisms
are explained and categorized according to the presented |
2210.00312.pdf | Published as a conference paper at ICLR 2023
MULTIMODAL ANALOGICAL REASONING OVER
KNOWLEDGE GRAPHS
Ningyu Zhang1∗, Lei Li1∗, Xiang Chen1∗, Xiaozhuan Liang1, Shumin Deng2, Huajun Chen1†
1Zhejiang University, AZFT Joint Lab for Knowledge Engine
2National University of Singapore
{zhangningyu,leili21,xiang chen,liangxiaozhuan,231sm,huajunsir }@zju.edu.cn
ABSTRACT
Analogical reasoning is fundamental to human cognition and holds an important
place in various fields. However, previous studies mainly focus on single-modal
analogical reasoning and ignore taking advantage of structure knowledge. No-
tably, the research in cognitive psychology has demonstrated that information
from multimodal sources always brings more powerful cognitive transfer than
single modality sources. To this end, we introduce the new task of multimodal
analogical reasoning over knowledge graphs, which requires multimodal reason-
ing ability with the help of background knowledge. Specifically, we construct
a Multimodal Analogical Reasoning data Set (MARS) and a multimodal knowledge graph MarKG. We evaluate with multimodal knowledge graph embedding
and pre-trained Transformer baselines, illustrating the potential challenges of the
proposed task. We further propose a novel model-agnostic Multimodal analogical
reasoning framework with Transformer (MarT) motivated by the structure map-
ping theory, which can obtain better performance. We hope our work can deliver
benefits and inspire future research1.
1 I NTRODUCTION
Analogical reasoning – the ability to perceive and use relational similarity between two situations
or events – holds an important place in human cognition (Johnson-Laird, 2006; Wu et al., 2020;
Bengio et al., 2021; Chen et al., 2022a) and can provide back-end support for various fields such
as education (Thagard, 1992) and creativity (Goel, 1997), thus appealing to the AI community. Early on,
Mikolov et al. (2013b); Gladkova et al. (2016a); Ethayarajh et al. (2019a) propose visual analogical
reasoning aiming at lifting machine intelligence in Computer Vision (CV) by associating vision
with relational, structural, and analogical reasoning. Meanwhile, researchers of Natural Language
Processing (NLP) hold the connectionist assumption (Gentner, 1983) of linear analogy (Ethayarajh
et al., 2019b); for example, the relation between two words can be inferred through vector arithmetic
of word embeddings. However, it is still an open question whether artificial neural networks are also
capable of recognizing analogies among different modalities.
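To make the linear-analogy assumption concrete, here is a minimal sketch (not from the paper) of how a word analogy a : b :: c : ? is typically resolved by vector arithmetic over pretrained embeddings; the toy embedding dictionary below is a placeholder, so with real embeddings the expected answer would be "queen".

```python
import numpy as np

def solve_analogy(emb: dict, a: str, b: str, c: str) -> str:
    """Return the word d maximizing cos(emb[d], emb[b] - emb[a] + emb[c])."""
    target = emb[b] - emb[a] + emb[c]
    target /= np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):            # exclude the query words themselves
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Toy example with made-up 3-d embeddings (illustrative only).
emb = {w: np.random.randn(3) for w in ["man", "woman", "king", "queen"]}
print(solve_analogy(emb, "man", "king", "woman"))
```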
Note that humans can quickly acquire new abilities based on finding a common relational system
between two exemplars, situations, or domains. Based on Mayer’s Cognitive Theory of multimedia
learning (Hegarty & Just, 1993; Mayer, 2002), human learners often perform better on tests with
analogy when they have learned from multimodal sources than single-modal sources. Evolving
from recognizing single-modal analogies to exploring multimodal reasoning for neural models, we
emphasize the importance of a new kind of analogical reasoning task with Knowledge Graphs (KGs).
In this paper, we introduce the task of multimodal analogical reasoning over knowledge graphs to fill
this blank. Unlike the previous multiple-choice QA setting, we directly predict the analogical target
and formulate the task as link prediction without explicitly providing relations. Specifically, the task can be formalized as $(e_h, e_t) : (e_q, ?)$ with the help of background multimodal knowledge graph
∗Equal contribution and shared co-first authorship.
†Corresponding author.
1Code and datasets are available in https://github.com/zjunlp/MKG_Analogy .
|
2310.12397.pdf | GPT-4 Doesn’t Know It’s Wrong: An Analysis of
Iterative Prompting for Reasoning Problems
Kaya Stechly∗, Matthew Marquez∗, Subbarao Kambhampati∗
Abstract
There has been considerable divergence of opinion on the reasoning abilities
of Large Language Models (LLMs). While the initial optimism that reasoning
might emerge automatically with scale has been tempered by a slew of counterexamples, ranging from multiplication to simple planning, there is still the widespread belief that LLMs can self-critique and improve their own solutions in an iterative fashion. This belief seemingly rests on the assumption that verification of correctness should be easier than generation (a rather classical argument from computational complexity) that should be irrelevant to LLMs to the extent that what they are doing is approximate retrieval. In this paper, we set out to systematically
investigate the effectiveness of iterative prompting of LLMs in the context of Graph
Coloring , a canonical NP-complete reasoning problem that is related to proposi-
tional satisfiability as well as practical problems like scheduling and allocation.
We present a principled empirical study of the performance of GPT4 in solving
graph coloring instances or verifying the correctness of candidate colorings–both in
direct and iterative modes. In iterative modes, we experiment both with the model
critiquing its own answers and an external correct reasoner verifying proposed
solutions. In both cases, we analyze whether the content of the criticisms actually
affects bottom-line performance. The study seems to indicate that (i) LLMs are bad at solving graph coloring instances, (ii) they are no better at verifying a solution, and thus are not effective in iterative modes with LLMs critiquing LLM-generated solutions, and (iii) the correctness and content of the criticisms, whether by LLMs or external solvers, seem largely irrelevant to the performance of iterative prompting.
We show that the observed effectiveness of LLMs in iterative settings is largely due
to the correct solution being fortuitously present in the top-k completions of the
prompt (and being recognized as such by an external verifier). Our results thus call
into question claims about the self-critiquing capabilities of state of the art LLMs.
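To illustrate why external verification of a candidate coloring is cheap relative to generating one, here is a minimal verifier sketch (not the authors' code; the edge-list and coloring formats are assumptions). The violation messages it returns are the kind of content that can be fed back as criticism in the iterative mode.

```python
def verify_coloring(edges, coloring, num_colors):
    """Check a candidate graph coloring: returns (ok, list of violation messages).

    edges: iterable of (u, v) vertex pairs; coloring: dict vertex -> color index.
    """
    problems = []
    for u, v in edges:
        if u not in coloring or v not in coloring:
            problems.append(f"vertex {u if u not in coloring else v} is uncolored")
        elif coloring[u] == coloring[v]:
            problems.append(f"edge ({u}, {v}) has both endpoints colored {coloring[u]}")
    for vert, col in coloring.items():
        if not (0 <= col < num_colors):
            problems.append(f"vertex {vert} uses out-of-range color {col}")
    return (len(problems) == 0), problems

# Example: a triangle cannot be 2-colored.
ok, msgs = verify_coloring([(0, 1), (1, 2), (0, 2)], {0: 0, 1: 1, 2: 0}, num_colors=2)
print(ok, msgs)   # False, with the violated edge reported
```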
1 Introduction
Large Language Models (LLMs), essentially n-gram models on steroids which have been trained on
web-scale language corpus, have caught the imagination of the AI research community with linguistic
behaviors that no one expected text completion systems to possess. Their seeming versatility has led
many researchers to wonder whether they can also do well on reasoning tasks typically associated
with system 2 competency. Initial excitement based on anecdotal performance of LLMs on reasoning
tasks has dissipated to some extent due to the recent spate of studies questioning the robustness of such behaviors, be it planning [17, 8], simple arithmetic and logic [5], or general mathematical and abstract benchmarks [14, 6]. There still exists considerable optimism that even if LLMs can't generate correct solutions in one go, their accuracy improves in an iterative prompting regime, where LLMs will be able to "self-critique" their candidate solutions and refine them to the point of correctness [20, 19, 15, 18, 7]. This belief seems to rest largely on the assumption that verification of correctness
∗Arizona State University, Tempe.
Preprint. Under review. |
2309.14322.pdf | Small-scale proxies for large-scale Transformer training instabilities
Mitchell Wortsman Peter J. Liu Lechao Xiao Katie Everett
Alex Alemi Ben Adlam John D. Co-Reyes Izzeddin Gur Abhishek Kumar
Roman Novak Jeffrey Pennington Jascha Sohl-dickstein Kelvin Xu
Jaehoon Lee*, Justin Gilmer*, Simon Kornblith*
Google DeepMind
Abstract
Teams that have trained large Transformer-based mod-
els have reported training instabilities at large scale
that did not appear when training with the same
hyperparameters at smaller scales. Although the
causes of such instabilities are of scientific interest,
the amount of resources required to reproduce them
has made investigation difficult. In this work, we
seek ways to reproduce and study training stability
and instability at smaller scales. First, we focus on
two sources of training instability described in pre-
vious work: the growth of logits in attention layers
(Dehghani et al., 2023) and divergence of the output
logits from the log probabilities (Chowdhery et al.,
2022). By measuring the relationship between learn-
ing rate and loss across scales, we show that these
instabilities also appear in small models when training
at high learning rates, and that mitigations previously
employed at large scales are equally effective in this
regime. This prompts us to investigate the extent to
which other known optimizer and model interventions
influence the sensitivity of the final loss to changes
in the learning rate. To this end, we study meth-
ods such as warm-up, weight decay, and the µParam
(Yang et al., 2022), and combine techniques to train
small models that achieve similar losses across orders
of magnitude of learning rate variation. Finally, to
conclude our exploration we study two cases where
instabilities can be predicted before they emerge by
examining the scaling behavior of model activation
and gradient norms.
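For readers unfamiliar with the qk-layernorm mitigation shown in Figure 1, the idea (from Dehghani et al. [11]) is to apply LayerNorm to the queries and keys before forming attention logits, which keeps the logits from growing with activation scale. The snippet below is a hedged illustration of that idea, not the authors' training code; the shapes, the omission of learned LayerNorm parameters, and the toy magnitudes are assumptions.

```python
import numpy as np

def layernorm(x, eps=1e-6):
    """Normalize the last axis to zero mean and unit variance (no learned scale/bias here)."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def attention_logits(q, k, head_dim, qk_layernorm=True):
    """Compute scaled dot-product attention logits, optionally with qk-layernorm."""
    if qk_layernorm:
        q, k = layernorm(q), layernorm(k)   # bounds q.k regardless of activation magnitude
    return q @ k.swapaxes(-1, -2) / np.sqrt(head_dim)

# Toy check: large-magnitude activations produce huge logits without qk-layernorm.
rng = np.random.default_rng(0)
q = 50.0 * rng.standard_normal((1, 8, 64))   # deliberately large activations
k = 50.0 * rng.standard_normal((1, 8, 64))
print(np.abs(attention_logits(q, k, 64, qk_layernorm=False)).max())
print(np.abs(attention_logits(q, k, 64, qk_layernorm=True)).max())
```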
1 Introduction
Scaling up transformers has led to remarkable progress
from chat models to image generation. However, not
[Figure 1 plots final eval loss versus learning rate (top) for models with parameter counts N from 2.4e+06 to 1.2e+09, with qk-layernorm = True and False, and LR sensitivity versus number of parameters (bottom).]

Figure 1: Qk-layernorm [11] enables stable training across three orders of magnitude of learning rate (LR) variation. (Top) For transformers with N parameters, we plot the
effect of learning rate on final evaluation loss. (Bottom)
We use LR sensitivity to summarize the top plot. LR sensi-
tivity measures the expected deviation from optimal when
varying learning rate across three orders of magnitude.
Qk-layernorm reduces LR sensitivity, but LR sensitivity
still increases with model scale.
|
2308.05660.pdf | Thermodynamic Linear Algebra
Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon,
Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles
Normal Computing Corporation, New York, New York, USA
Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and
machine learning. Hence, accelerating these primitives with novel computing hardware would have
tremendous economic impact. Quantum computing has been proposed for this purpose, although
the resource requirements are far beyond current technological capabilities, so this approach remains
long-term in timescale. Here we consider an alternative physics-based computing paradigm based
on classical thermodynamics, to provide a near-term approach to accelerating linear algebra.
At first sight, thermodynamics and linear algebra seem to be unrelated fields. In this work, we
connect solving linear algebra problems to sampling from the thermodynamic equilibrium distri-
bution of a system of coupled harmonic oscillators. We present simple thermodynamic algorithms
for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix
determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously
establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly
in matrix dimension. Our algorithms exploit thermodynamic principles like ergodicity, entropy,
and equilibration, highlighting the deep connection between these two seemingly distinct fields, and
opening up algebraic applications for thermodynamic computing hardware.
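To sketch the connection in the simplest case (an illustrative digital simulation, not the proposed analog hardware or the paper's exact algorithm): take the quadratic potential U(x) = ½xᵀAx − bᵀx for a symmetric positive definite A. Its Gibbs distribution is Gaussian with mean A⁻¹b, so time-averaging samples from the equilibrated dynamics estimates the solution of Ax = b.

```python
import numpy as np

def thermo_solve(A, b, steps=20000, dt=1e-3, burn_in=5000, seed=0):
    """Estimate x = A^{-1} b as the time-average of overdamped Langevin dynamics
    dx = -(A x - b) dt + sqrt(2) dW, whose stationary density is N(A^{-1} b, A^{-1})."""
    rng = np.random.default_rng(seed)
    d = len(b)
    x = np.zeros(d)
    running_sum, count = np.zeros(d), 0
    for t in range(steps):
        noise = rng.standard_normal(d)
        x = x - (A @ x - b) * dt + np.sqrt(2 * dt) * noise
        if t >= burn_in:
            running_sum += x
            count += 1
    return running_sum / count

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
print(thermo_solve(A, b), np.linalg.solve(A, b))   # the two should roughly agree
```

On thermodynamic hardware the equilibration and averaging would be performed physically rather than by this digital loop, which is where the claimed speedups come from.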
I. Introduction
Basic linear algebra primitives such as solving a linear system of the form $Ax = b$ and obtaining the
inverse of a matrix are present in many modern algorithms. Such primitives are relevant to a multitude
of applications, including for example optimal control of dynamic systems and resource allocation. They
are also a common subroutine of many artificial intelligence (AI) algorithms, and account for a substantial
portion of the time and energy costs in some cases.
The most common method to perform these primitives is LU decomposition, whose time-complexity
scales as $O(d^3)$. Many proposals have been made to accelerate such primitives, for example using iterative
methods such as the conjugate gradient method. In the last decade, these primitives have been accelerated
by hardware improvements, notably by their implementation on graphical processing units (GPUs), fueling
massive parallelization. However, the scaling of these methods is still a prohibitive factor, and obtaining
a good approximate solution to a dense matrix of more than a few tens of thousand dimensions remains
challenging.
Exploiting physics to solve mathematical problems is a deep idea, with much focus on solving optimization
problems [1–3]. In the context of linear algebra, much attention has been paid to quantum computers [4],
since the mathematics of discrete-variable quantum mechanics matches that of linear algebra. A quantum
algorithm [5] to solve linear systems has been proposed, which for sparse and well-conditioned matrices
scales as $\log d$. However, the resource requirements [6] for this algorithm are far beyond current hardware
capabilities. More generally building large-scale quantum hardware has remained difficult [7], and variational
quantum algorithms for linear algebra [8–10] have battled with vanishing gradient issues [11–13].
Therefore, the search for alternative hardware proposals that can exploit physical dynamics to accelerate
linear algebra primitives has been ongoing. Notably, memristor crossbar arrays have been of interest for
accelerating matrix-vector multiplications [14, 15]. Solving linear systems has also been the subject of
analog computing approaches [16].
Recently, we defined a new class of hardware, built from stochastic, analog building blocks, which is
ultimately thermodynamic in nature [17]. (See also probabilistic-bit computers [18–20] and thermodynamic
neural networks [21–24] for alternative approaches to thermodynamic computing [25]). AI applications like
generative modeling are a natural fit for this thermodynamic hardware, where stochastic fluctuations are
exploited to generate novel samples.
In this work, we surprisingly show that the same thermodynamic hardware from Ref. [17] can also be used
to accelerate key primitives in linear algebra. Thermodynamics is not typically associated with linear algebra,
and connecting these two fields is therefore non-trivial. Here, we exploit the fact that the mathematics of
harmonic oscillator systems is inherently affine (i.e., linear), and hence we can map linear algebraic primitives
onto such systems. (See also Ref. [26] for a discussion of harmonic oscillators in the context of quantum
computing speedups.) We show that simply by sampling from the thermal equilibrium distribution of coupled harmonic oscillators, one can solve a variety of linear algebra problems. |
2309.10150.pdf | Q-Transformer: Scalable Offline Reinforcement
Learning via Autoregressive Q-Functions
Yevgen Chebotar∗, Quan Vuong∗, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar,
Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum,
Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath,
Jaspiar Singht, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, Sergey Levine
Google DeepMind
Abstract: In this work, we present a scalable reinforcement learning method for
training multi-task policies from large offline datasets that can leverage both hu-
man demonstrations and autonomously collected data. Our method uses a Trans-
former to provide a scalable representation for Q-functions trained via offline tem-
poral difference backups. We therefore refer to the method as Q-Transformer.
By discretizing each action dimension and representing the Q-value of each ac-
tion dimension as separate tokens, we can apply effective high-capacity sequence
modeling techniques for Q-learning. We present several design decisions that en-
able good performance with offline RL training, and show that Q-Transformer
outperforms prior offline RL algorithms and imitation learning techniques on a
large diverse real-world robotic manipulation task suite. The project’s website
and videos can be found at qtransformer.github.io
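A hedged sketch of the per-dimension discretization and autoregressive Q-value readout described above (illustrative only; the bin count, action bounds, and greedy decoding are assumptions rather than the released implementation):

```python
import numpy as np

NUM_BINS = 256          # assumed discretization resolution per action dimension
ACTION_LOW, ACTION_HIGH = -1.0, 1.0

def discretize(action_dim_value: float) -> int:
    """Map a continuous action value in [ACTION_LOW, ACTION_HIGH] to a bin index (token)."""
    frac = (action_dim_value - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
    return int(np.clip(frac * NUM_BINS, 0, NUM_BINS - 1))

def undiscretize(bin_index: int) -> float:
    """Map a bin index back to the center of its bin."""
    return ACTION_LOW + (bin_index + 0.5) / NUM_BINS * (ACTION_HIGH - ACTION_LOW)

def greedy_action(q_per_dimension):
    """Autoregressive greedy decoding: pick the argmax bin for each action dimension in turn.
    q_per_dimension: list of length D of arrays of shape (NUM_BINS,), one per dimension,
    as produced by a Transformer conditioned on the observation and earlier dimensions."""
    return [undiscretize(int(np.argmax(q))) for q in q_per_dimension]

# Toy example with random "Q-values" for a 3-dimensional action.
rng = np.random.default_rng(0)
fake_q = [rng.standard_normal(NUM_BINS) for _ in range(3)]
print(greedy_action(fake_q))
```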
1 Introduction
[Figure 1 diagram labels: human demonstrations, autonomous data, mixed quality data, conservative regularization, autoregressive Q-learning, Monte-Carlo returns, environment step, action dimension, Q-values per action dimension, Q-Transformer.]

Figure 1: Q-Transformer enables training high-capacity sequential architectures on mixed quality data. Our policies are able to improve upon human demonstrations and execute a variety of manipulation tasks in the real world.

Robotic learning methods that incorporate large
and diverse datasets in combination with high-
capacity expressive models, such as Transform-
ers [1, 2, 3, 4, 5, 6], have the potential to acquire
generalizable and broadly applicable policies that
perform well on a wide variety of tasks [1, 2].
For example, these policies can follow natural
language instructions [4, 7], perform multi-stage
behaviors [8, 9], and generalize broadly across
environments, objects, and even robot morpholo-
gies [10, 3]. However, many of the recently pro-
posed high-capacity models in the robotic learn-
ing literature are trained with supervised learn-
ing methods. As such, the performance of the re-
sulting policy is limited by the degree to which
human demonstrators can provide high-quality
demonstration data. This is limiting for two rea-
sons. First, we would like robotic systems that
are more proficient than human teleoperators, ex-
ploiting the full potential of the hardware to per-
form tasks quickly, fluently, and reliably. Second,
we would like robotic systems that get better with
autonomously gathered experience, rather than
relying entirely on high-quality demonstrations.
Reinforcement learning in principle provides
both of these capabilities. A number of promising recent advances demonstrate the successes of
large-scale robotic RL in varied settings, such as robotic grasping and stacking [11, 12], learning
heterogeneous tasks with human-specified rewards [13], learning multi-task policies [14, 15], learn-
ing goal-conditioned policies [16, 17, 18, 19], and robotic navigation [20, 21, 22, 23, 24]. However,
∗Equal contribution.
Corresponding emails: chebotar@google.com, quanhovuong@google.com .
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA. |
2109.01652.pdf | Published as a conference paper at ICLR 2022
FINETUNED LANGUAGE MODELS AREZERO-SHOT
LEARNERS
Jason Wei∗, Maarten Bosma∗, Vincent Y. Zhao∗, Kelvin Guu∗, Adams Wei Yu,
Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le
Google Research
ABSTRACT
This paper explores a simple method for improving the zero-shot learning abilities
of language models. We show that instruction tuning —finetuning language models
on a collection of datasets described via instructions—substantially improves zero-
shot performance on unseen tasks.
We take a 137B parameter pretrained language model and instruction tune it on
over 60 NLP datasets verbalized via natural language instruction templates. We
evaluate this instruction-tuned model, which we call FLAN, on unseen task types.
FLAN substantially improves the performance of its unmodified counterpart and
surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even
outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC,
OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning
datasets, model scale, and natural language instructions are key to the success of
instruction tuning.
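As a minimal sketch of what "verbalizing a dataset via natural language instruction templates" can look like (the template wording and label mapping below are illustrative assumptions, not FLAN's actual templates; the example text is taken from Figure 1):

```python
# Turn a structured NLI example into an instruction-style prompt/target pair.
NLI_TEMPLATE = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis?\n"
    "OPTIONS:\n- yes\n- it is not possible to tell\n- no"
)

def verbalize_nli(example: dict) -> tuple[str, str]:
    """Map a raw NLI record {'premise', 'hypothesis', 'label'} to (instruction, target) text."""
    labels = {0: "yes", 1: "it is not possible to tell", 2: "no"}
    prompt = NLI_TEMPLATE.format(premise=example["premise"], hypothesis=example["hypothesis"])
    return prompt, labels[example["label"]]

prompt, target = verbalize_nli({
    "premise": "At my age you will probably have learnt one lesson.",
    "hypothesis": "It's not certain how many lessons you'll learn by your thirties.",
    "label": 1,
})
print(prompt, "\n=>", target)
```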
[Figure 1 content: finetuning on many tasks ("instruction-tuning"), e.g., a translation input "Translate this sentence to Spanish: The new office building was built in less than three months." with target "El nuevo edificio de oficinas se construyó en tres meses."; inference on unseen task types, e.g., a commonsense-reasoning input "Here is a goal: Get a cool sleep on summer days. How would you accomplish this goal? OPTIONS: -Keep stack of pillow cases in fridge. -Keep stack of pillow cases in oven." with target "keep stack of pillow cases in fridge", and a natural language inference input "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis? OPTIONS: -yes -it is not possible to tell -no" with FLAN response "It is not possible to tell". Performance on unseen task types:

| Task type | GPT-3 175B zero-shot | GPT-3 175B few-shot | FLAN 137B zero-shot |
|---|---|---|---|
| Natural language inference | 42.9 | 53.2 | 56.2 |
| Reading comprehension | 63.7 | 72.6 | 77.4 |
| Closed-book QA | 49.8 | 55.7 | 56.6 |
]
Figure 1: Top: overview of instruction tuning and FLAN. Instruction tuning finetunes a pretrained
language model on a mixture of tasks phrased as instructions. At inference time, we evaluate on
an unseen task type; for instance, we could evaluate the model on natural language inference (NLI)
when no NLI tasks were seen during instruction tuning. Bottom: performance of zero-shot FLAN,
compared with zero-shot and few-shot GPT-3, on three unseen task types where instruction tuning
improved performance substantially out of ten we evaluate. NLI datasets: ANLI R1–R3, CB, RTE.
Reading comprehension datasets: BoolQ, MultiRC, OBQA. Closed-book QA datasets: ARC-easy,
ARC-challenge, NQ, TriviaQA.
∗Lead contributors. Author contributions listed at end of paper.
|
1610.06258.pdf | Using Fast Weights to Attend to the Recent Past
Jimmy Ba (University of Toronto, jimmy@psi.toronto.edu)
Geoffrey Hinton (University of Toronto and Google Brain, geoffhinton@google.com)
Volodymyr Mnih (Google DeepMind, vmnih@google.com)
Joel Z. Leibo (Google DeepMind, jzl@google.com)
Catalin Ionescu (Google DeepMind, cdi@google.com)
Abstract
Until recently, research on artificial neural networks was largely restricted to sys-
tems with only two types of variable: Neural activities that represent the current
or recent input and weights that learn to capture regularities among inputs, outputs
and payoffs. There is no good reason for this restriction. Synapses have dynam-
ics at many different time-scales and this suggests that artificial neural networks
might benefit from variables that change slower than activities but much faster
than the standard weights. These “fast weights” can be used to store temporary
memories of the recent past and they provide a neurally plausible way of imple-
menting the type of attention to the past that has recently proved very helpful in
sequence-to-sequence models. By using fast weights we can avoid the need to
store copies of neural activity patterns.
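As a schematic illustration of the kind of fast-weight memory described above (a Hebbian outer-product store with decay; this is a simplified sketch of the mechanism, and the paper's full model embeds it inside a recurrent network rather than using it standalone):

```python
import numpy as np

class FastWeightMemory:
    """Associative memory stored in rapidly changing weights A, updated by an
    outer-product (Hebbian) rule with decay: A <- lam * A + eta * h h^T.
    Schematic illustration only."""

    def __init__(self, hidden_size, lam=0.95, eta=0.5):
        self.A = np.zeros((hidden_size, hidden_size))
        self.lam, self.eta = lam, eta

    def write(self, h):
        """Store the (normalized) hidden vector h into the fast weights."""
        h = h / (np.linalg.norm(h) + 1e-8)
        self.A = self.lam * self.A + self.eta * np.outer(h, h)

    def read(self, query):
        """Retrieve: patterns similar to the query are amplified by A."""
        return self.A @ query

mem = FastWeightMemory(hidden_size=4)
pattern = np.array([1.0, 0.0, -1.0, 0.5])
mem.write(pattern)
print(mem.read(pattern))                          # points back toward the stored pattern
print(mem.read(np.array([0.0, 1.0, 0.0, 0.0])))   # weak response to an unrelated query
```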
1 Introduction
Ordinary recurrent neural networks typically have two types of memory that have very different time
scales, very different capacities and very different computational roles. The history of the sequence
currently being processed is stored in the hidden activity vector, which acts as a short-term memory
that is updated at every time step. The capacity of this memory is $O(H)$, where $H$ is the number
of hidden units. Long-term memory about how to convert the current input and hidden vectors into
the next hidden vector and a predicted output vector is stored in the weight matrices connecting the
hidden units to themselves and to the inputs and outputs. These matrices are typically updated at the
end of a sequence and their capacity is $O(H^2) + O(IH) + O(HO)$, where $I$ and $O$ are the numbers of input and output units.
Long short-term memory networks [Hochreiter and Schmidhuber, 1997] are a more complicated
type of RNN that work better for discovering long-range structure in sequences for two main reasons:
First, they compute increments to the hidden activity vector at each time step rather than recomputing
the full vector1. This encourages information in the hidden states to persist for much longer. Second,
they allow the hidden activities to determine the states of gates that scale the effects of the weights.
These multiplicative interactions allow the effective weights to be dynamically adjusted by the input
or hidden activities via the gates. However, LSTMs are still limited to a short-term memory capacity
of $O(H)$ for the history of the current sequence.
Until recently, there was surprisingly little practical investigation of other forms of memory in recur-
rent nets despite strong psychological evidence that it exists and obvious computational reasons why
it was needed. There were occasional suggestions that neural networks could benefit from a third
form of memory that has much higher storage capacity than the neural activities but much faster
dynamics than the standard slow weights. This memory could store information specific to the his-
tory of the current sequence so that this information is available to influence the ongoing processing
1This assumes the “remember gates” of the LSTM memory cells are set to one. |
sciadv.adn0042.pdf | Hikichi et al., Sci. Adv. 10, eadn0042 (2024) 1 March 2024
Science Advances | Research Article
VIROLOGY
Epistatic pathways can drive HIV- 1 escape from
integrase strand transfer inhibitors
Yuta Hikichi1, Jonathan R. Grover2, Alicia Schäfer2, Walther Mothes2, Eric O. Freed1*
People living with human immunodeficiency virus (HIV) receiving integrase strand transfer inhibitors (INSTIs)
have been reported to experience virological failure in the absence of resistance mutations in integrase. To elucidate
INSTI resistance mechanisms, we propagated HIV- 1 in the presence of escalating concentrations of the INSTI
dolutegravir. HIV- 1 became resistant to dolutegravir by sequentially acquiring mutations in the envelope glyco -
protein (Env) and the nucleocapsid protein. The selected Env mutations enhance the ability of the virus to spread
via cell- cell transfer, thereby increasing the multiplicity of infection (MOI). While the selected Env mutations confer
broad resistance to multiple classes of antiretrovirals, the fold resistance is ~2 logs higher for INSTIs than for other
classes of drugs. We demonstrate that INSTIs are more readily overwhelmed by high MOI than other classes of
antiretrovirals. Our findings advance the understanding of how HIV- 1 can evolve resistance to antiretrovirals,
including the potent INSTIs, in the absence of drug- target gene mutations.
INTRODUCTION
Six classes of antiretrovirals (ARVs) have been approved for clinical
use by the US Food and Drug Administration: nucleoside reverse
transcriptase (RT) inhibitors (NRTIs), nonnucleoside RT inhibitors
(NNRTIs), integrase strand transfer inhibitors (INSTIs), protease
inhibitors (PIs), entry inhibitors, and a recently approved capsid
inhibitor, lenacapavir (LEN) (1 , 2). Combination antiretroviral therapy
(cART) has markedly reduced human immunodeficiency virus
(HIV)–associated morbidity and mortality. However, resistance to
ARVs does arise in some people living with HIV (PLWH), often
associated with poor adherence, use of suboptimal drug regimens,
and/or lack of viral load monitoring, particularly in poorly re-
sourced areas (3). In most cases, drug resistance is caused by muta-
tions in the genes targeted by the drugs, often by interfering with the
interaction between the drug and the viral target (3). Thus, in the
clinical setting, drug resistance monitoring is largely focused on
drug- target genes. Recently approved ARVs have been developed
with the aim of overcoming resistant variants observed in the clinic.
For example, second- generation INSTIs, such as dolutegravir (DTG)
and bictegravir (BIC), show some efficacy against IN mutants that
are resistant to first- generation INSTIs like raltegravir (RAL) (4).
These second- generation INSTIs also exhibit higher genetic barriers
to resistance compared to the first- generation INSTIs and RT in-
hibitors ( 5). At present, regimens containing DTG are therefore rec-
ommended as the preferred first- line regimen for most PLWH (6).
Retroviral integration requires two enzymatic reactions catalyzed
by IN: 3′-end processing, during which the enzyme cleaves two nucleotides from the 3′ ends of the newly synthesized linear viral
DNA, and DNA strand transfer, which entails the insertion of the
viral DNA ends into host cell target DNA. The integration reaction
takes place in a macromolecular complex known as the intasome,
which comprises an IN multimer and the two viral DNA ends (4).
INSTIs inhibit the strand transfer reaction by binding IN and the
viral DNA ends in the intasome and chelating the Mg++ ions required for IN catalytic activity (4 ). Five INSTIs are currently
approved for clinical use: two “first- generation” INSTIs, RAL and
elvitegravir (EVG), and three “second- generation” INSTIs, DTG,
BIC, and cabotegravir (CAB).
Despite the predominant role of drug- target gene mutations in
HIV- 1 drug resistance, mutations outside drug- target genes can
contribute to drug resistance. Particularly in the case of PIs and
INSTIs, some PLWH experience virological failure in the absence
of mutations in the target genes (7 –11). Mutations in Gag and
the envelope glycoprotein (Env) have been implicated in PI resist-
ance ( 12, 13). In vitro studies have reported that mutations in the
3′polypurine tract (3′ PPT) reduce the susceptibility of HIV- 1 to
INSTIs (14–16). 3′PPT mutations may lead to the accumulation of
unintegrated 1- LTR circles that can support the expression of viral
proteins (14, 16) particularly in cell lines that express HTLV- 1 Tax
(14). Wijting et al . (11) reported a distinct set of mutations in the
3′PPT from a patient failing DTG monotherapy in the absence of
INSTI resistance mutations in IN. However, in other studies, these
in vivo–derived 3′ PPT mutations were found not to confer resistance
to INSTIs in vitro (17). It is therefore still unclear whether, or to
what extent, 3′PPT mutations contribute to INSTI resistance in vivo.
Nevertheless, as more potent inhibitors with higher genetic barriers
to resistance are developed, unconventional drug resistance pathways
will become important to consider.
The Env glycoproteins play a central role in HIV- 1 entry and
immune evasion. Env exists as a metastable trimer of three pro-
tomers comprising gp120 and gp41 heterodimers on the surface of
the virion and the infected cell. The binding of gp120 to CD4 on the
target cell triggers conformational rearrangement of the Env trimer
that exposes coreceptor (CCR5 or CXCR4) binding sites in gp120.
Subsequent binding of gp120 to coreceptor promotes insertion of
the gp41 fusion peptide into the target cell membrane, and the
refolding of gp41 heptad repeat 1 and 2 (HR1 and HR2) mediates
the fusion of viral and cellular membranes, allowing viral entry into
the cytosol of the target cell (18). Single- molecule Förster resonance
energy transfer (smFRET) analysis has demonstrated that the Env
trimer spontaneously transitions between at least three distinct pre-
fusion conformations: state 1 (pretriggered, closed conformation),
state 2 (necessary, intermediate conformation), and state 3 (fully

1Virus-Cell Interaction Section, HIV Dynamics and Replication Program, Center for Cancer Research, National Cancer Institute, Frederick, MD, USA. 2Department of Microbial Pathogenesis, Yale University School of Medicine, New Haven, CT, USA.
*Corresponding author. Email: efreed@mail.nih.gov
|
10.1016.j.cell.2023.12.034.pdf | Leading Edge
Commentary
Enabling structure-based drug discovery
utilizing predicted models
Edward B. Miller,1,* Howook Hwang,1 Mee Shelley,2 Andrew Placzek,2 João P.G.L.M. Rodrigues,1 Robert K. Suto,3 Lingle Wang,1 Karen Akinsanya,1 and Robert Abel1
1Schrödinger New York, 1540 Broadway, 24th Floor, New York, NY 10036, USA
2Schrödinger Portland, 101 SW Main Street, Suite 1300, Portland, OR 97204, USA
3Schrödinger Framingham, 200 Staples Drive, Suite 210, Framingham, MA 01702, USA
*Correspondence: ed.miller@schrodinger.com
https://doi.org/10.1016/j.cell.2023.12.034
High-quality predicted structures enable structure-based approaches to an expanding number of drug dis-
covery programs. We propose that by utilizing free energy perturbation (FEP), predicted structures can be
confidently employed to achieve drug design goals. We use structure-based modeling of hERG inhibition
to illustrate this value of FEP.
Introduction
Traditional structure-based drug design offers a rational basis to guide the discovery of novel chemical matter. Combined with the apparent success of structure-prediction methodology (AlphaFold, RoseTTAFold, et al.), the domain of applicability of structure-based drug design would, at first glance, appear to have dramatically increased due to the sudden availability of seemingly high-fidelity predicted structures for any protein sequence. However, preliminary evidence suggests that AlphaFold struggles to reliably generate experimentally observed alternative protein conformations.1 Crucially, the utility of these predicted structures for atomistic modeling and drug design must be scrutinized before they can be deployed in lieu of experimental structures.

The most direct measurement of a predicted structure's accuracy is how well it matches a later solved experimental structure. This metric is crucial for assessing the performance of structure prediction methods, but within the realm of drug discovery, the relevance and value of predicted protein structure models is directly related to their impact on drug design outcomes. Multiple atomic resolution structures, both predicted and experimental, can be used to rationally optimize molecular properties, such as on-target potency, off-target potency, and absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties. In this Commentary, we explore how predicted structures can be confidently applied to these drug design challenges. We focus on free energy perturbation, a computational assay, to quantify the accuracy of predicted structures for these purposes.
protein target in the therapeutically rele-
vant state. The challenge with structure-based drug design is being able to obtain
the right structure in the disease-relevant
state bound with project chemical matter.As an example, we point to the experi-
mental structural biology pursuits around
the leucine-rich repeat kinase 2 (LRRK2).Mutants of LRRK2 have been implicated
in Parkinson’s disease. Structures have
been obtained of inactive LRRK2 with-out an inhibitor, as a monomer (PDB:7LHW), and as a dimer (PDB: 7LHT), as
well as the G2019S mutant (PDB: 7LI3).
Later, an active type 1 inhibitor boundstructure was published (PDB: 8TXZ) as
well as an inactive state with a type 2 in-
hibitor (PDB: 8TZE). Functionally, LRRK2is associated with cellular trafficking,
and a structure of microtubule-bound
LRRK2 was also recently published(PDB: 7THY). Generally, the demand fora protein structure in various physiologi-
cally relevant structural and dynamical
states outpaces the supply.
From a structure prediction perspec-
tive, numerous publications have offered
approaches to bias or to explore multiplereceptor states as part of structure pre-
diction.
2,3Under favorable conditions, alimited number of predicted structures
are presented to the chemist, who must
then decide which model or models areworthy of committing resources toward.
This is not a trivial commitment—the
expectation is that a predicted structureshould precede, if not outright replace,
an experimental structure. Therefore, if a
predicted structure is considered accu-rate, it should drive consequential deci-sions, among them which compounds to
pursue for costly synthesis and to provide
a clear, ideally quantitative rationale asto why.
Any predicted structure must be judged
by its fidelity to reality. Rather than focuson measures of the geometric agreement
with some future experimental structure,
we propose here that a more meaningfulquestion is to ask the extent to which
the predicted structure can be used to
model existing structure-activity relation-ships. The expectation is that a modelthat can recapitulate a known structure-
activity relationship (SAR) is qualified to
make predictions for novel compoundsand to drive synthesis of those com-
pounds in response to predicted binding
affinity.
While a large number of methods
ranging from knowledge-based machine
learning to physics-based simulationshave shown promises in predicting pro-tein-ligand binding free energies,
4we
will focus on the application of one of
the most extensively and broadly vali-dated methods, free energy perturbation
(FEP), to evaluate a model’s ability to
ll
Cell 187, February 1, 2024 ª2024 Elsevier Inc. 521 |
1805.02867.pdf | Online normalizer calculation for softmax
Maxim Milakov (NVIDIA, mmilakov@nvidia.com)
Natalia Gimelshein (NVIDIA, ngimelshein@nvidia.com)
Abstract
The Softmax function is ubiquitous in machine learning, and multiple previous works have suggested faster alternatives for it. In this paper we propose a way to compute classical Softmax with fewer memory accesses and hypothesize that this reduction in memory accesses should improve Softmax performance on actual hardware. The benchmarks confirm this hypothesis: Softmax accelerates by up to 1.3x and Softmax+TopK combined and fused by up to 5x.
1 Introduction
Neural network models are widely used for language modeling, for tasks such as machine translation [1] and speech recognition [2]. These models compute word probabilities taking into account the already generated part of the sequence. The probabilities are usually computed by a Projection layer, which "projects" the hidden representation into the output vocabulary space, and a following Softmax function, which transforms raw logits into the vector of probabilities. Softmax is utilized not only for neural networks; for example, it is employed in multinomial logistic regression [3].

A number of previous works suggested faster alternatives to compute word probabilities. Differentiated Softmax [4] and SVD-Softmax [5] replace the projection layer, which is usually just a matrix multiplication, with more computationally efficient alternatives. Multiple variants of Hierarchical Softmax [6, 7, 8] split a single Projection+Softmax pair into multiple much smaller versions of these two functions organized in tree-like structures. Sampled-based approximations, such as Importance Sampling [9], Noise Contrastive Estimation [10], and BlackOut [11], accelerate training by running Softmax on select elements of the original vector. Finally, Self-Normalized Softmax [12] augments the objective function to make the softmax normalization term close to 1 (and skip computing it during inference).

This is not an exhaustive list, but, hopefully, a representative one. Almost all of the approaches still need to run the original Softmax function, either on the full vector or a reduced one. There are two exceptions that don't need to compute the softmax normalization term: training with Noise Contrastive Estimation and inference with Self-Normalized Softmax. All others will benefit from the original Softmax running faster.

To the best of our knowledge there have been no targeted efforts to improve the performance of the original Softmax function. We tried to address this shortcoming and figured out a way to compute Softmax with fewer memory accesses. We benchmarked it to see if those reductions in memory accesses translate into performance improvements on real hardware.
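To make the "fewer memory accesses" idea concrete: the key trick is to compute the softmax maximum and normalizer in a single pass over the logits, rescaling the running sum whenever the running maximum changes. Below is a minimal Python illustration of that online-normalizer recurrence (a sketch of the idea; the paper targets fused GPU kernels and also covers the Softmax+TopK fusion):

```python
import math

def online_softmax(logits):
    """Numerically safe softmax whose max and normalizer are computed in one pass:
    track the running max m and the running sum d, rescaling d by exp(m_old - m_new)
    whenever the max increases."""
    m = float("-inf")   # running maximum
    d = 0.0             # running normalizer sum
    for x in logits:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in logits]

print(online_softmax([1.0, 2.0, 3.0]))   # matches the classical safe softmax
```

A classical "safe" softmax needs one pass to find the maximum and a second to accumulate the normalizer; the recurrence above folds those into a single pass over the input, which is where the memory-access savings come from when the computation is fused.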
Preprint. Work in progress. |
10.1101.2024.01.02.573943.pdf | De Novo Atomic Protein Structure Modeling for Cryo-EM
Density Maps Using 3D Transformer and Hidden Markov
Model
Nabin Giri1,2and Jianlin Cheng1,2*
1Electrical Engineering and Computer Science, University of Missouri, Columbia, 65211,
Missouri, USA.
2NextGen Precision Health Institute, University of Missouri, Columbia, 65211, Missouri,
USA.
*Corresponding author(s). E-mail(s): chengji@missouri.edu;
Contributing authors: ngzvh@missouri.edu;
Abstract
Accurately building three-dimensional (3D) atomic structures from 3D cryo-electron microscopy (cryo-
EM) density maps is a crucial step in the cryo-EM-based determination of the structures of protein
complexes. Despite improvements in the resolution of 3D cryo-EM density maps, the de novo con-
version of density maps into 3D atomic structures for protein complexes that do not have accurate
homologous or predicted structures to be used as templates remains a significant challenge. Here,
we introduce Cryo2Struct, a fully automated ab initio cryo-EM structure modeling method that uti-
lizes a 3D transformer to identify atoms and amino acid types in cryo-EM density maps first, and
then employs a novel Hidden Markov Model (HMM) to connect predicted atoms to build backbone
structures of proteins. Tested on a standard test dataset of 128 cryo-EM density maps with varying
resolutions (2.1 - 5.6 Å) and different numbers of residues (730 - 8,416), Cryo2Struct built substan-
tially more accurate and complete protein structural models than the widely used ab initio method
- Phenix in terms of multiple evaluation metrics. Moreover, on a new test dataset of 500 recently
released density maps with varying resolutions (1.9 - 4.0 Å) and different numbers of residues (234
- 8,828), it built more accurate models than on the standard dataset. And its performance is rather
robust against the change of the resolution of density maps and the size of protein structures.
Keywords: cryo-EM, atomic protein structure modeling, deep learning, transformer, Hidden Markov Model
1 Introduction
Determining the three-dimensional (3D) atomic
structures of macromolecules, such as protein
complexes and assemblies [1–3], is fundamental
in structural biology. The 3D arrangement of atoms provides essential insights into the mecha-
nistic understanding of molecular function of pro-
teins [4]. In recent years, cryo-electron microscopy
(cryo-EM) [5] has emerged as a key technol-
ogy for experimentally determining the structures
of large protein complexes and assemblies. How-
ever, modeling atomic protein structures from
|
score-matching-denoising.pdf |
A Connection Between Score Matching
and Denoising Autoencoders
Pascal Vincent
vincentp@iro.umontreal.ca
Dept. IRO, Université de Montréal,
CP 6128, Succ. Centre-Ville, Montréal (QC) H3C 3J7, Canada.
Technical Report 1358
Département d’Informatique et de Recherche Opérationnelle
December 2010
THIS IS A PREPRINT VERSION OF A NOTE THAT HAS BEEN
ACCEPTED FOR PUBLICATION IN NEURAL COMPUTATION.
Keywords: autoencoder, energy based models, score matching, denoising, density
estimation.
Abstract
Denoising autoencoders have been previously shown to be competitive alternatives
to Restricted Boltzmann Machines for unsupervised pre-training of each layer of a deep
architecture. We show that a simple denoising autoencoder training criterion is equiv-
alent to matching the score (with respect to the data) of a specific energy based model
to that of a non-parametric Parzen density estimator of the data. This yields several
useful insights. It defines a proper probabilistic model for the denoising autoencoder
technique which makes it in principle possible to sample from them or to rank examples
by their energy. It suggests a different way to apply score matching that is related to
learning to denoise and does not require computing second derivatives. It justifies the
use of tied weights between the encoder and decoder, and suggests ways to extend the
success of denoising autoencoders to a larger family of energy-based models.
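For orientation, the two objectives being connected can be written as follows (standard formulations restated in generic notation, not copied from the note): explicit score matching fits a model score $\psi(x;\theta)$ to the score of the data density $q(x)$, while denoising score matching replaces that unknown score with the score of a Gaussian corruption kernel $q_\sigma(\tilde{x}\mid x)$, which is available in closed form:

$$J_{\mathrm{ESM}}(\theta)=\mathbb{E}_{q(x)}\Big[\tfrac{1}{2}\,\big\|\psi(x;\theta)-\nabla_x\log q(x)\big\|^2\Big],\qquad J_{\mathrm{DSM}}(\theta)=\mathbb{E}_{q_\sigma(\tilde{x},x)}\Big[\tfrac{1}{2}\,\big\|\psi(\tilde{x};\theta)-\tfrac{x-\tilde{x}}{\sigma^2}\big\|^2\Big],$$

using $\nabla_{\tilde{x}}\log q_\sigma(\tilde{x}\mid x)=(x-\tilde{x})/\sigma^2$ for Gaussian corruption. The note's result relates a denoising autoencoder training criterion to an objective of the second form for a particular energy-based model.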
1 Introduction
This note uncovers an unsuspected link between the score matching technique (Hyväri-
nen, 2005; Hyvärinen, 2008) for learning the parameters of unnormalized density mod-
els over continuous-valued data, and the training of denoising autoencoders (Vincent
et al. , 2008, 2010).
Score matching (SM) is an alternative to the maximum likelihood principle suitable
for unnormalized probability density models whose partition function is intractable. Its |
2202.08371.pdf | THE QUARKS OF ATTENTION
PIERRE BALDI AND ROMAN VERSHYNIN
Abstract. Attention plays a fundamental role in both natural and artificial intelligence systems. In deep learning, attention-based neural architectures, such as transformer architectures, are widely used to tackle problems in natural language processing and beyond. Here we investigate the fundamental building blocks of attention and their computational properties. Within the standard model of deep learning, we classify all possible fundamental building blocks of attention in terms of their source, target, and computational mechanism. We identify and study the three most important mechanisms: additive activation attention, multiplicative output attention (output gating), and multiplicative synaptic attention (synaptic gating). The gating mechanisms correspond to multiplicative extensions of the standard model and are used across all current attention-based deep learning architectures. We study their functional properties and estimate the capacity of several attentional building blocks in the case of linear and polynomial threshold gates. Surprisingly, additive activation attention plays a central role in the proofs of the lower bounds. Attention mechanisms reduce the depth of certain basic circuits and leverage the power of quadratic activations without incurring their full cost.
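As an informal illustration of the two gating mechanisms named above (a toy formulation for intuition, not the paper's formal definitions): output gating multiplies a unit's output by a gate computed from a gating source, while synaptic gating multiplicatively modulates the synaptic weights before they are applied.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_gating(x, s, W, V):
    """Output (activation) gating: the gate sigmoid(V s) scales the unit's output."""
    return np.tanh(W @ x) * sigmoid(V @ s)

def synaptic_gating(x, s, W, U):
    """Synaptic gating: the gating signal modulates the weights before they act on x."""
    gated_W = W * sigmoid(U @ s)[:, None]   # one multiplicative gate per output row
    return np.tanh(gated_W @ x)

rng = np.random.default_rng(0)
x, s = rng.standard_normal(4), rng.standard_normal(3)        # input and gating source
W, V, U = rng.standard_normal((2, 4)), rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
print(output_gating(x, s, W, V), synaptic_gating(x, s, W, U))
```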
Keywords: neural networks; attention; transformers; capacity; complexity; deep learning.
Contents
1. Introduction
2. Systematic Identification of Attention Quarks: Within and Beyond the Standard Model
3. All you Need is Gating: Transformers
4. Functional Aspects of Attention
5. Cardinal Capacity Review
6. Capacity of Single Unit Attention
7. Capacity of Attention Layers
8. Conclusion
9. Appendix: Detailed Proof of Theorem 6.5
Acknowledgment
References
“Everyone knows what attention is... It is the taking possession by the mind in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought...” William James, Principles of Psychology (1890).
Date: February 18, 2022.
|
2404.12358.pdf | Preprint
From rtoQ∗: Your Language Model is Secretly a Q-Function
Rafael Rafailov* (Stanford University, rafailov@stanford.edu)
Joey Hejna* (Stanford University, jhejna@stanford.edu)
Ryan Park (Stanford University, rypark@stanford.edu)
Chelsea Finn (Stanford University, cbfinn@stanford.edu)
Abstract
Reinforcement Learning From Human Feedback (RLHF) has been critical
to the success of the latest generation of generative AI models. In response
to the complex nature of the classical RLHF pipeline, direct alignment
algorithms such as Direct Preference Optimization (DPO) have emerged as
an alternative approach. Although DPO solves the same objective as the
standard RLHF setup, there is a mismatch between the two approaches.
Standard RLHF deploys reinforcement learning in a specific token-level MDP,
while DPO is derived as a bandit problem in which the whole response of the
model is treated as a single arm. In this work we rectify this difference, first
we theoretically show that we can derive DPO in the token-level MDP as a
general inverse Q-learning algorithm, which satisfies the Bellman equation.
Using our theoretical results, we provide three concrete empirical insights.
First, we show that because of its token level interpretation, DPO is able to
perform some type of credit assignment. Next, we prove that under the token
level formulation, classical search-based algorithms, such as MCTS, which
have recently been applied to the language generation space, are equivalent
to likelihood-based search on a DPO policy. Empirically we show that a
simple beam search yields meaningful improvement over the base DPO
policy. Finally, we show how the choice of reference policy causes implicit
rewards to decline during training. We conclude by discussing applications of
our work, including information elicitation in multi-turn dialogue, reasoning,
agentic applications and end-to-end training of multi-model systems.
1 Introduction
Reinforcement Learning from Human Feedback (RLHF) has become the de facto method for
aligning large language models (LLMs) with human intent due to its success in a wide range
of applications from summarization (Stiennon et al., 2022) to instruction following (Ouyang
et al., 2022). By learning a reward function from human-labeled comparisons, RLHF is able
to capture complex objectives that are indescribable in practice. Following the success
of (Ziegler et al., 2020), numerous works have considered new algorithms for training and
sampling from large models in various domains using techniques from reinforcement learning
(RL). In particular, direct alignment methods, such as Direct Preference Optimization (DPO)
(Rafailov et al., 2023) have gained traction in recent months because of their simplicity (Zhao
et al., 2023a; Azar et al., 2023). Instead of learning a reward function and then using RL,
direct alignment methods use the relationship between reward functions and policies in the
contextual bandit setting to optimize both simultaneously. Similar ideas have since been
applied to vision language (Zhao et al., 2023b) and image generation models (Lee et al., 2023).
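For reference, the DPO objective referred to above is, in its standard published form (restated here with the usual notation: preferred completion $y_w$, dispreferred completion $y_l$, reference policy $\pi_{\mathrm{ref}}$, and temperature parameter $\beta$):

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}-\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right].$$

The implicit reward $\beta\log\frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}$ is what this paper reinterprets at the token level, rather than treating the whole response as a single bandit arm.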
*Denotes equal contribution
|
2112.07868.pdf | Few-shot Instruction Prompts for Pretrained Language Models to Detect
Social Biases
Shrimai Prabhumoye1, Rafal Kocielnik2, Mohammad Shoeybi1,
Anima Anandkumar1,2, Bryan Catanzaro1
1NVIDIA,2California Institute of Technology
{sprabhumoye@nvidia.com, rafalko@caltech.edu}
Abstract
Warning: this paper contains content that may
be offensive or upsetting.
Detecting social bias in text is challenging due
to nuance, subjectivity, and difficulty in ob-
taining good quality labeled datasets at scale,
especially given the evolving nature of so-
cial biases and society. To address these
challenges, we propose a few-shot instruction-
based method for prompting pre-trained lan-
guage models (LMs). We select a few class-
balanced exemplars from a small support
repository that are closest to the query to be
labeled in the embedding space. We then pro-
vide the LM with an instruction that consists of
this subset of labeled exemplars, the query
text to be classified, a definition of bias, and
prompt it to make a decision. We demon-
strate that large LMs used in a few-shot con-
text can detect different types of fine-grained
biases with similar and sometimes superior ac-
curacy to fine-tuned models. We observe that
the largest 530B parameter model is signifi-
cantly more effective in detecting social bias
compared to smaller models (achieving at least
13% improvement in AUC metric compared
to other models). It also maintains a high
AUC (dropping less than 2%) when the labeled
repository is reduced to as few as 100 samples.
Large pretrained language models thus make it
easier and quicker to build new bias detectors.
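The recipe in the abstract can be condensed into a short sketch. The embedding model, label names, and instruction wording below are placeholders chosen for illustration; only the overall procedure (class-balanced nearest exemplars, a definition of bias, and the query) follows the description above.

import numpy as np

def nearest_exemplars(query_emb, support_embs, support_labels, k_per_class=2):
    sims = support_embs @ query_emb / (
        np.linalg.norm(support_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    chosen = []
    for label in sorted(set(support_labels)):
        idx = [i for i, lab in enumerate(support_labels) if lab == label]
        idx.sort(key=lambda i: -sims[i])
        chosen.extend(idx[:k_per_class])   # class-balanced selection of nearest exemplars
    return chosen

def build_prompt(query_text, exemplar_texts, exemplar_labels):
    lines = ["Definition: a post is biased if it states or implies a social stereotype.",
             "Classify each post as 'biased' or 'not biased'."]
    for text, label in zip(exemplar_texts, exemplar_labels):
        lines.append(f"Post: {text}\nAnswer: {label}")
    lines.append(f"Post: {query_text}\nAnswer:")
    return "\n\n".join(lines)

support_embs, query_emb = np.random.randn(10, 32), np.random.randn(32)
labels = ["biased", "not biased"] * 5
ids = nearest_exemplars(query_emb, support_embs, labels)
print(build_prompt("<post to classify>", [f"<exemplar {i}>" for i in ids], [labels[i] for i in ids]))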
1 Introduction
Detecting social bias in text is of utmost importance
as stereotypes and biases can be projected through
language (Fiske, 1993). Detecting bias is challeng-
ing because it can be expressed through seemingly
innocuous statements which are implied and rarely
explicit, and the interpretation of bias can be sub-
jective leading to noise in labels. In this work, we
focus on detecting social bias in text as defined in
Sap et al. (2020) using few-shot instruction-based
prompting of pre-trained language models (LMs).
Current approaches that detect bias require large
labeled datasets to train the models (Chung et al.,
2019; Waseem and Hovy, 2016; Zampieri et al.,
2019; Davidson et al., 2017a). Collecting such
labeled sets is an expensive process and hence
they are not easily available. Furthermore, most
of the prior work relies on finetuning (Sap et al.,
2020; Mandl et al., 2019; Zampieri et al., 2019)
neural architectures which is costly in case of
large LMs (Strubell et al., 2019) and access to
finetune large LMs may be limited (Brown et al.,
2020). Prior work on bias detection has not fo-
cused on modeling multiple types of biases across
datasets as it requires careful optimization to suc-
ceed (Hashimoto et al., 2017; Søgaard and Gold-
berg, 2016; Ruder, 2017). Finetuning a model
can also lead to over-fitting especially in case of
smaller train sets and to catastrophic forgetting of
knowledge present in the pre-trained model (Fatemi
et al., 2021). Moreover, finetuning approaches are
prone to be affected by noisy labels (Song et al.,
2022) which is especially an issue with datasets
for bias detection. The human labeling used to an-
notate these datasets can introduce bias and noisy
labels (Hovy and Prabhumoye, 2021).
We harness the knowledge present in large scale
pre-trained language models (Davison et al., 2019;
Zhou et al., 2020; Petroni et al., 2019; Zhong et al.,
2021; Shin et al., 2020) to detect a rich set of bi-
ases. Our method prompts the LM with a textual
post and labeled exemplars along with instructions
to detect bias in the given post. We explore the
capabilities of LMs to flexibly accommodate differ-
ent dimensions of bias without any finetuning and
with limited access to labeled samples (few-shot
classification).
Prompt-engineering plays a central role in
finetuning-free approaches (Liu et al., 2021b). It
is the process of creating a prompting function that
results in the best performance on the desired down-
stream task. Prompt-engineering can be performed
arXiv:2112.07868v2 [cs.CL] 15 Apr 2022 |
2101.03288.pdf | How to Train Your Energy-Based Models
Yang Song yangsong@cs.stanford.edu
Stanford University
Diederik P. Kingma dpkingma@google.com
Google Research
Abstract
Energy-Based Models (EBMs), also known as non-normalized probabilistic models, specify
probability density or mass functions up to an unknown normalizing constant. Unlike
most other probabilistic models, EBMs do not place a restriction on the tractability of
the normalizing constant, thus are more flexible to parameterize and can model a more
expressive family of probability distributions. However, the unknown normalizing constant
of EBMs makes training particularly difficult. Our goal is to provide a friendly introduction
to modern approaches for EBM training. We start by explaining maximum likelihood
training with Markov chain Monte Carlo (MCMC), and proceed to elaborate on MCMC-free
approaches, including Score Matching (SM) and Noise Contrastive Estimation (NCE).
We highlight theoretical connections among these three approaches, and end with a brief
survey on alternative training methods, which are still under active research. Our tutorial
is targeted at an audience with basic understanding of generative models who want to apply
EBMs or start a research project in this direction.
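As a concrete anchor for the MCMC-based route that the tutorial covers first, here is a minimal sketch of maximum likelihood training with Langevin sampling; the network, step sizes, and number of sampling steps are arbitrary illustrative choices rather than recommended settings.

import torch

energy = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.SiLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(energy.parameters(), lr=1e-3)

def langevin_sample(x, n_steps=20, step=0.01):
    # Unadjusted Langevin dynamics targeting p(x) proportional to exp(-E(x)).
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - 0.5 * step * grad + step ** 0.5 * torch.randn_like(x)
    return x.detach()

def mle_step(x_data):
    x_model = langevin_sample(torch.randn_like(x_data))
    # Contrastive gradient: lower the energy of data, raise it on model samples.
    loss = energy(x_data).mean() - energy(x_model).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(mle_step(torch.randn(128, 2) + 3.0))   # toy 2-D data centered at (3, 3)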
1. Introduction
Probabilistic models with a tractable likelihood are a double-edged sword. On one hand, a
tractable likelihood allows for straightforward comparison between models, and straightfor-
ward optimization of the model parameters w.r.t. the log-likelihood of the data. Through
tractable models such as autoregressive (Graves, 2013; Germain et al., 2015; Van Oord et al.,
2016) or flow-based generative models (Dinh et al., 2014, 2016; Rezende and Mohamed,
2015), we can learn flexible models of high-dimensional data. In some cases even though
the likelihood is not completely tractable, we can often compute and optimize a tractable
lower bound of the likelihood, as in the framework of variational autoencoders (Kingma and
Welling, 2014; Rezende et al., 2014).
Still, the set of models with a tractable likelihood is constrained. Models with a tractable
likelihood need to be of a certain form: for example, in case of autoregressive models, the
model distribution is factorized as a product of conditional distributions, and in flow-based
generative models the data is modeled as an invertible transformation of a base distribution.
In case of variational autoencoders, the data must be modeled as a directed latent-variable
model. A tractable likelihood is related to the fact that these models assume that exact
synthesis of pseudo-data from the model can be done with a specified, tractable procedure.
These assumptions are not always natural.
Energy-based models (EBM) are much less restrictive in functional form: instead of speci-
fying a normalized probability, they only specify the unnormalized negative log-probability,
arXiv:2101.03288v2 [cs.LG] 17 Feb 2021 |
2303.07487v2.pdf | Using VAEs to Learn Latent Variables: Observations on
Applications in cryo-EM
Edelberg, Daniel G.
Yale University
Lederman, Roy R.
Yale University
May 12, 2023
Abstract
Variational autoencoders (VAEs) are a popular generative model used to approximate distributions.
The encoder part of the VAE is used in amortized learning of latent variables, producing a latent rep-
resentation for data samples. Recently, VAEs have been used to characterize physical and biological
systems. In this case study, we qualitatively examine the amortization properties of a VAE used in
biological applications. We find that in this application the encoder bears a qualitative resemblance to
more traditional explicit representation of latent variables.
1 Introduction
Variational Autoencoders (VAEs) provide a deep learning method for efficient approximate inference for
problems with continuous latent variables. A brief reminder about VAEs is presented in Section 2.1; a more
complete description can be found, inter alia, in [1, 2, 3, 4, 5, 6]. Since their introduction, VAEs have found
success in a wide variety of fields. Recently, they have been used in scientific applications and physical
systems [7, 8, 9, 10, 11].
Given a set of data x={xi}, VAEs simultaneously learn an encoder Enc ξthat expresses a conditional
distribution qξ(z|x) of a latent variable zi given a sample xi, and a decoder Dec θ which expresses the
conditional distribution pθ(x|z). They are trained using empirical samples to approximate the distribution
pθ(x,z).
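For readers who want these objects in code, here is a deliberately tiny sketch of an amortized Gaussian encoder, a decoder, and the ELBO they are jointly trained on; the dimensions and architecture are placeholders and bear no relation to CryoDRGN's.

import torch

class TinyVAE(torch.nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = torch.nn.Linear(x_dim, 2 * z_dim)   # outputs (mu, log_var) of q(z|x)
        self.dec = torch.nn.Linear(z_dim, x_dim)

    def elbo(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization trick
        recon = -((self.dec(z) - x) ** 2).sum(-1)               # Gaussian log-likelihood up to a constant
        kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1)
        return (recon - kl).mean()

vae = TinyVAE()
loss = -vae.elbo(torch.randn(16, 784))   # maximizing the ELBO = minimizing its negative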
In this work we focus on the properties of the encoder distribution qξ(z|x) that arise as an approximation of
the distribution pθ(z|x). A single encoder qξ(z|x) is optimized to be able to produce the distribution of latent
variable z for any input x, which is a form of amortization. Intuitively, one might expect that the encoder
qξ(z|x) would generalize well to plausible inputs that it has not encountered during the optimization/training
procedure. Indeed, this generalization is observed in many applications, and the ability of the encoder
to compute the latent variables for new unseen data points is used in some applications. In addition,
the variational construction sidesteps a statistical problem by marginalizing over the latent variables to
approximate the maximum-likelihood estimator (MLE) for some parameters θ of the distribution pθ(x,z),
rather than θ and the latent variables zi associated with each sample xi. In the latter case, the number
of variables grows with the number of samples and the estimates of pθ(x,z) may not converge to the true
solution.
We present a qualitative case study of the amortization in VAEs in a physical problem, looking at a VAE
applied to the problem of continuous heterogeneity in cryo-electron microscopy (cryo-EM), implemented in
CryoDRGN [7]. We examine the hypothesis that the encoder in this VAE generalizes well to previously unseen
data, and we compare the use of a VAE to the use of an explicit variational estimation of the distribution of
the latent variables. In order to study the generalization in a realistic environment, we exploit well-known
invariances and approximate invariances in cryo-EM data to produce natural tests.
Our case study suggests that in this case the encoder does not seem to generalize well; this can arguably
be interpreted as a form of overfitting of the data. Furthermore, we find that using explicit latent variables
arXiv:2303.07487v2 [stat.ML] 10 May 2023 |
2205.12365.pdf | Low-rank Optimal Transport:
Approximation, Statistics and Debiasing
Meyer Scetbon
CREST, ENSAE
meyer.scetbon@ensae.fr
Marco Cuturi
Apple and CREST, ENSAE
cuturi@apple.com
Abstract
The matching principles behind optimal transport (OT) play an increasingly impor-
tant role in machine learning, a trend which can be observed when OT is used to
disambiguate datasets in applications (e.g. single-cell genomics) or used to improve
more complex methods (e.g. balanced attention in transformers or self-supervised
learning). To scale to more challenging problems, there is a growing consensus that
OT requires solvers that can operate on millions, not thousands, of points. The low-
rank optimal transport (LOT) approach advocated in Scetbon et al. [2021] holds
several promises in that regard, and was shown to complement more established
entropic regularization approaches, being able to insert itself in more complex
pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to
those that have a low-nonnegative rank, yielding linear time algorithms in cases
of interest. However, these promises can only be fulfilled if the LOT approach
is seen as a legitimate contender to entropic regularization when compared on
properties of interest, where the scorecard typically includes theoretical properties
(statistical complexity and relation to other methods) or practical aspects (debiasing,
hyperparameter tuning, initialization). We target each of these areas in this paper
in order to cement the impact of low-rank approaches in computational OT.
1 Introduction
Optimal transport (OT) is used across data-science to put in correspondence different sets of observa-
tions. These observations may come directly from datasets, or, in more advanced applications, depict
intermediate layered representations of data. OT theory provides a single grammar to describe and
solve increasingly complex matching problems (linear, quadratic, regularized, unbalanced, etc...),
making it gain a stake in various areas of science such as single-cell biology Schiebinger et al.
[2019], Yang et al. [2020], Demetci et al. [2020], imaging Schmitz et al. [2018], Heitz et al. [2020],
Zheng et al. [2020] or neuroscience Janati et al. [2020], Koundal et al. [2020].
Regularized approaches to OT. Solving OT problems at scale poses, however, formidable chal-
lenges. The most obvious among them is computational: the Kantorovich [1942] problem on discrete
measures of size n is a linear program that requires O(n³ log n) operations to be solved. A second
and equally important challenge lies in the estimation of OT in high-dimensional settings, since it
suffers from the curse-of-dimensionality Fournier and Guillin [2015]. The advent of regularized
approaches, such as entropic regularization [Cuturi, 2013], has pushed these boundaries thanks to
faster algorithms [Chizat et al., 2020, Clason et al., 2021] and improved statistical aspects [Genevay
et al., 2018a]. Despite these clear strengths, regularized OT solvers remain, however, costly as they
typically scale quadratically in the number of observations.
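For reference, a bare-bones Sinkhorn solver for entropy-regularized OT is shown below; note the dense n-by-m kernel matrix, which is precisely the quadratic cost that the low-rank couplings discussed next are designed to avoid. The regularization strength and iteration count are illustrative.

import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    # a, b: histograms; C: cost matrix; returns the entropy-regularized coupling P.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

n = 200
x, y = np.random.randn(n, 2), np.random.randn(n, 2) + 1.0
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
P = sinkhorn(np.full(n, 1 / n), np.full(n, 1 / n), C)
print(P.sum(), (P * C).sum())   # total mass (about 1) and the regularized transport cost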
Scaling up OT using low-rank couplings. While it is always intuitively possible to reduce the size
of measures (e.g. using k-means) prior to solving an OT between them, a promising line of work
proposes to combine both [Forrow et al., 2019, Scetbon et al., 2021, 2022]. Conceptually, these
Preprint. Under review.
arXiv:2205.12365v2 [stat.ML] 15 Sep 2022 |
2207.06569.pdf | Benign, Tempered, or Catastrophic:
A Taxonomy of Overfitting
Neil Mallinar∗
UC San Diego
nmallina@ucsd.edu
James B. Simon∗
UC Berkeley
james.simon@berkeley.edu
Amirhesam Abedsoltan
UC San Diego
aabedsoltan@ucsd.edu
Parthe Pandit
UC San Diego
parthepandit@ucsd.edu
Mikhail Belkin
UC San Diego
mbelkin@ucsd.edu
Preetum Nakkiran
Apple & UC San Diego
preetum@apple.com
Abstract
The practical success of overparameterized neural networks has motivated the recent scientific study of interpo-
lating methods, which perfectly fit their training data. Certain interpolating methods, including neural networks,
can fit noisy training data without catastrophically bad test performance, in defiance of standard intuitions from
statistical learning theory. Aiming to explain this, a body of recent work has studied benign overfitting, a phenomenon
where some interpolating methods approach Bayes optimality, even in the presence of noise. In this work we argue
that while benign overfitting has been instructive and fruitful to study, many real interpolating methods like neural
networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time,
implying these models are neither benign nor catastrophic but rather fall in an intermediate regime. We call this
intermediate regime tempered overfitting, and we initiate its systematic study. We first explore this phenomenon in the
context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and kernel eigenspectrum
under which KR exhibits each of the three behaviors. We find that kernels with powerlaw spectra, including Laplace
kernels and ReLU neural tangent kernels, exhibit tempered overfitting. We then empirically study deep neural
networks through the lens of our taxonomy, and find that those trained to interpolation are tempered, while those
stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
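The object at the center of the kernel analysis takes only a few lines to set up. The sketch below fits near-ridgeless kernel regression with a Laplace kernel to noisy one-dimensional data; the data, bandwidth, and noise level are arbitrary choices meant only to make the setting concrete.

import numpy as np

def laplace_kernel(X, Z, bandwidth=1.0):
    d = np.abs(X[:, None, :] - Z[None, :, :]).sum(-1)
    return np.exp(-d / bandwidth)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(200)     # noisy training labels

ridge = 1e-8                                                 # ridge -> 0 gives (near-)interpolation
alpha = np.linalg.solve(laplace_kernel(X, X) + ridge * np.eye(len(X)), y)

X_test = rng.uniform(-1, 1, size=(1000, 1))
pred = laplace_kernel(X_test, X) @ alpha
print(np.mean((pred - np.sin(3 * X_test[:, 0])) ** 2))       # excess risk against the clean target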
1 Introduction
In the last decade, the dramatic success of overparameterized deep neural networks (DNNs) has inspired the field
to reexamine the theoretical foundations of generalization. Classical statistical learning theory suggests that an
algorithm which interpolates (i.e. perfectly fits) its training data will typically catastrophically overfit at test time,
generalizing no better than a random function.1
Figure 1c illustrates the catastrophic overfitting classically expected of an interpolating method. Defying this
picture, DNNs can interpolate their training data and generalize well nonetheless [Neyshabur et al., 2015, Zhang
et al., 2017], suggesting the need for a new theoretical paradigm within which to understand their overfitting.
This need motivated the identification and study of benign overfitting using the terminology of [Bartlett et al.,
2020] (also called “harmless interpolation” [Muthukumar et al., 2020]), a phenomenon in which certain methods that
perfectly fit the training data still approach Bayes-optimal generalization in the limit of large trainset size. Intuitively
speaking, benignly-overfitting methods fit the target function globally, yet fit the noise only locally, and the addition
of more label noise does not asymptotically degrade generalization. Figure 1a illustrates a simple method that is
∗Co-first authors.
1There are various ways to formalize this prediction depending on the setting: it is a consequence of the “bias-variance tradeoff” in statistics,
the “bias-complexity tradeoff” in PAC learning, and “capacity control”-based generalization bounds in kernel ridge regression.
arXiv:2207.06569v2 [cs.LG] 20 Oct 2022 |
1909.08593v2.pdf | Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler∗, Nisan Stiennon∗, Jeffrey Wu, Tom B. Brown
Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving
OpenAI
{dmz,nisan,jeffwu,tom,alec,damodei,paul,irving}@openai.com
Abstract
Reward learning enables the application of rein-
forcement learning (RL) to tasks where reward is
defined by human judgment, building a model of
reward by asking humans questions. Most work
on reward learning has used simulated environ-
ments, but complex information about values is of-
ten expressed in natural language, and we believe
reward learning for language is a key to making
RL practical and safe for real-world tasks. In this
paper, we build on advances in generative pretrain-
ing of language models to apply reward learning
to four natural language tasks: continuing text
with positive sentiment or physically descriptive
language, and summarization tasks on the TL;DR
and CNN/Daily Mail datasets. For stylistic con-
tinuation we achieve good results with only 5,000
comparisons evaluated by humans. For summa-
rization, models trained with 60,000 comparisons
copy whole sentences from the input but skip irrel-
evant preamble; this leads to reasonable ROUGE
scores and very good performance according to
our human labelers, but may be exploiting the fact
that labelers rely on simple heuristics.
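In this style of pipeline, the quantity optimized during fine-tuning is the learned reward penalized by a KL term that keeps the policy close to the pretrained model. The sketch below states that combination; the beta value and tensor contents are illustrative assumptions.

import torch

def penalized_reward(reward_model_score, policy_logprob, pretrained_logprob, beta=0.02):
    # R(x, y) = r(x, y) - beta * [log pi(y|x) - log rho(y|x)], evaluated per sample.
    return reward_model_score - beta * (policy_logprob - pretrained_logprob)

r = torch.tensor([1.3, -0.2])              # reward model outputs for two sampled continuations
logp_pi = torch.tensor([-35.0, -41.0])     # summed log-probs under the fine-tuned policy
logp_rho = torch.tensor([-37.5, -40.0])    # summed log-probs under the pretrained model
print(penalized_reward(r, logp_pi, logp_rho))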
1. Introduction
We would like to apply reinforcement learning to complex
tasks defined only by human judgment, where we can only
tell whether a result is good or bad by asking humans. To
do this, we can first use human labels to train a model of
reward, and then optimize that model. While there is a long
history of work learning such models from humans through
interaction, this work has only recently been applied to mod-
ern deep learning, and even then has only been applied to
relatively simple simulated environments (Christiano et al.,
2017; Ibarz et al., 2018; Bahdanau et al., 2018). By contrast,
real world settings in which humans need to specify com-
plex goals to AI agents are likely to both involve and require
natural language, which is a rich medium for expressing
value-laden concepts. Natural language is particularly im-
portant when an agent must communicate back to a human
to help provide a more accurate supervisory signal (Irving
et al., 2018; Christiano et al., 2018; Leike et al., 2018).
*Equal contribution. Correspondence to paul@openai.com.
Natural language processing has seen substantial recent ad-
vances. One successful method has been to pretrain a large
generative language model on a corpus of unsupervised data,
then fine-tune the model for supervised NLP tasks (Dai and
Le, 2015; Peters et al., 2018; Radford et al., 2018; Khandel-
wal et al., 2019). This method often substantially outper-
forms training on the supervised datasets from scratch, and
a single pretrained language model often can be fine-tuned
for state of the art performance on many different super-
vised datasets (Howard and Ruder, 2018). In some cases,
fine-tuning is not required: Radford et al. (2019) find that
generatively trained models show reasonable performance
on NLP tasks with no additional training (zero-shot).
There is a long literature applying reinforcement learning to
natural language tasks. Much of this work uses algorithmi-
cally defined reward functions such as BLEU for translation
(Ranzato et al., 2015; Wu et al., 2016), ROUGE for summa-
rization (Ranzato et al., 2015; Paulus et al., 2017; Wu and
Hu, 2018; Gao et al., 2019b), music theory-based rewards
(Jaques et al., 2017), or event detectors for story generation
(Tambwekar et al., 2018). Nguyen et al. (2017) used RL
on BLEU but applied several error models to approximate
human behavior. Wu and Hu (2018) and Cho et al. (2019)
learned models of coherence from existing text and used
them as RL rewards for summarization and long-form gen-
eration, respectively. Gao et al. (2019a) built an interactive
summarization tool by applying reward learning to one ar-
ticle at a time. Experiments using human evaluations as
rewards include Kreutzer et al. (2018) which used off-policy
reward learning for translation, and Jaques et al. (2019)
which applied the modified Q-learning methods of Jaques
et al. (2017) to implicit human preferences in dialog. Yi
et al. (2019) learned rewards from humans to fine-tune dia-
log models, but smoothed the rewards to allow supervised
learning. We refer to Luketina et al. (2019) for a survey of
arXiv:1909.08593v2 [cs.CL] 8 Jan 2020 |
1406.2661.pdf | Generative Adversarial Nets
Ian J. Goodfellow, Jean Pouget-Abadie∗, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair†, Aaron Courville, Yoshua Bengio‡
D´epartement d’informatique et de recherche op ´erationnelle
Universit ´e de Montr ´eal
Montr ´eal, QC H3C 3J7
Abstract
We propose a new framework for estimating generative models via an adversar-
ial process, in which we simultaneously train two models: a generative model G
that captures the data distribution, and a discriminative model D that estimates
the probability that a sample came from the training data rather than G. The train-
ing procedure for G is to maximize the probability of D making a mistake. This
framework corresponds to a minimax two-player game. In the space of arbitrary
functions G and D, a unique solution exists, with G recovering the training data
distribution and D equal to 1/2 everywhere. In the case where G and D are defined
by multilayer perceptrons, the entire system can be trained with backpropagation.
There is no need for any Markov chains or unrolled approximate inference net-
works during either training or generation of samples. Experiments demonstrate
the potential of the framework through qualitative and quantitative evaluation of
the generated samples.
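A minimal training step for the two-player game described above might look as follows on one-dimensional toy data; the network sizes, learning rates, and the non-saturating generator loss are standard but illustrative choices rather than a prescription from the paper.

import torch
import torch.nn.functional as F

G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
D = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def train_step(x_real):
    z = torch.randn(x_real.size(0), 8)
    x_fake = G(z)
    # Discriminator step: classify real versus generated samples.
    d_loss = (F.binary_cross_entropy_with_logits(D(x_real), torch.ones(x_real.size(0), 1))
              + F.binary_cross_entropy_with_logits(D(x_fake.detach()), torch.zeros(x_real.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: maximize the probability that D is fooled (non-saturating form).
    g_loss = F.binary_cross_entropy_with_logits(D(x_fake), torch.ones(x_real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(64, 1) * 0.5 + 2.0))   # toy "real" data drawn from N(2, 0.25)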
1 Introduction
The promise of deep learning is to discover rich, hierarchical models [2] that represent probability
distributions over the kinds of data encountered in artificial intelligence applications, such as natural
images, audio waveforms containing speech, and symbols in natural language corpora. So far, the
most striking successes in deep learning have involved discriminative models, usually those that
map a high-dimensional, rich sensory input to a class label [14, 22]. These striking successes have
primarily been based on the backpropagation and dropout algorithms, using piecewise linear units
[19, 9, 10] which have a particularly well-behaved gradient . Deep generative models have had less
of an impact, due to the difficulty of approximating many intractable probabilistic computations that
arise in maximum likelihood estimation and related strategies, and due to difficulty of leveraging
the benefits of piecewise linear units in the generative context. We propose a new generative model
estimation procedure that sidesteps these difficulties.1
In the proposed adversarial nets framework, the generative model is pitted against an adversary: a
discriminative model that learns to determine whether a sample is from the model distribution or the
data distribution. The generative model can be thought of as analogous to a team of counterfeiters,
trying to produce fake currency and use it without detection, while the discriminative model is
analogous to the police, trying to detect the counterfeit currency. Competition in this game drives
both teams to improve their methods until the counterfeits are indistinguishable from the genuine
articles.
∗Jean Pouget-Abadie is visiting Université de Montréal from Ecole Polytechnique.
†Sherjil Ozair is visiting Université de Montréal from Indian Institute of Technology Delhi
‡Yoshua Bengio is a CIFAR Senior Fellow.
1All code and hyperparameters available at http://www.github.com/goodfeli/adversarial
arXiv:1406.2661v1 [stat.ML] 10 Jun 2014 |
2402.10171.pdf | Data Engineering for Scaling Language Models to 128K Context
Yao Fuκ, Rameswar Pandaη, Xinyao Niuµ, Xiang Yueπ, Hannaneh Hajishirziσ, Yoon Kimλ, Hao Pengδ
κUniversity of Edinburgh, ηMIT-IBM Watson AI Lab, µUniversity of Melbourne, πOhio State University
σUniversity of Washington, λMIT, δUIUC
yao.fu@ed.ac.uk yoonkim@mit.edu haopeng@illinois.edu
https://github.com/FranxYao/Long-Context-Data-Engineering
Abstract
We study the continual pretraining recipe for scal-
ing language models’ context lengths to 128K,
with a focus on data engineering. We hypoth-
esize that long context modeling, in particular
the ability to utilize information at arbitrary in-
put locations , is a capability that is mostly al-
ready acquired through large-scale pretraining,
and that this capability can be readily extended
to contexts substantially longer than seen during
training (e.g., 4K to 128K) through lightweight
continual pretraining on appropriate data mix-
ture. We investigate the quantity andquality of
the data for continual pretraining: (1) for quan-
tity, we show that 500 million to 5 billion to-
kens are enough to enable the model to retrieve
information anywhere within the 128K context;
(2) for quality, our results equally emphasize do-
main balance andlength upsampling . Concretely,
we find that naïvely upsampling longer data on
certain domains like books, a common practice
of existing work, gives suboptimal performance,
and that a balanced domain mixture is impor-
tant. We demonstrate that continual pretraining
of the full model on 1B-5B tokens of such data
is an effective and affordable strategy for scaling
the context length of language models to 128K.
Our recipe outperforms strong open-source long-
context models and closes the gap to frontier mod-
els like GPT-4 128K.
1. Introduction
A context window of 128K tokens enables large language
models to perform tasks that are significantly beyond the exist-
ing paradigm, such as multi-document question answer-
ing (Caciularu et al., 2023), repository-level code under-
standing (Bairi et al., 2023), long-history dialog model-
ing (Mazumder & Liu, 2024), and language model-powered
autonomous agents (Weng, 2023). A popular testbed for whether models can actually utilize long context length
is the recent Needle-in-a-Haystack test (Kamradt, 2023),
which asks the model to precisely recite the information
in a given sentence where the sentence (the “needle”) is
placed in an arbitrary location of a 128K long document (the
“haystack”). In the open-source space, although works like
LongLoRA (Chen et al., 2023b) and YaRN-Mistral (Peng
et al., 2023) theoretically support 100K context, they are
not able to pass this test at such context lengths, as shown
in Fig. 1. Currently, only closed-source frontier models like
GPT-4 128K have demonstrated strong performance on the
Needle-in-a-Haystack test.
This work investigates data engineering methods for scaling
language models’ context lengths. Our objective is to con-
tinue pretraining the language model on appropriate data
mixtures such that it can pass the Needle-in-a-Haystack test
at 128K length. Given that most existing models are trained
on less than 4K context length (Touvron et al., 2023a) and
that attention has quadratic complexity, continual pretrain-
ing with full attention on much longer context lengths (we
train on 64K-80K context lengths) may seem prohibitively
costly at a first glance. However, we show that this is feasi-
ble under academic-level resources (see Table 2). We use
LLaMA-2 7B and 13B as our base models. We do not make
any significant change to model architecture other than ad-
justing the base of RoPE, as in Xiong et al. (2023). Our
major focus is the data recipe: what andhow much data is
able to well-adapt a model to pass the Needle-in-a-Haystack
test at 128K context length.
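The architectural change mentioned above, adjusting the base of RoPE, amounts to slowing the rotation frequencies so that far-apart positions remain distinguishable. The sketch below shows rotary position embeddings with a configurable base; both base values used here are illustrative and are not the settings from the paper.

import torch

def rope_rotate(x, base=10000.0):
    # x: [seq_len, dim] with even dim; rotates channel pairs by position-dependent angles.
    seq_len, dim = x.shape
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(128 * 1024, 64)            # one 128K-token sequence of 64-dimensional query vectors
q_rot = rope_rotate(q, base=5_000_000.0)   # a larger base slows the rotation for long contexts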
We hypothesize that the capability to utilize information at
arbitrary locations within long context length is (mostly)
already acquired during pretraining, even for models pre-
trained on substantially shorter 4K contexts. This hypothe-
sis is in contrast to existing works like Xiong et al. (2023);
XVerse (2024), which perform continual pretraining on a
large amount of data (400B tokens) to inject long-context-
modeling capabilities; in this strategy, the cost can be as
high as pre-training from scratch. In this work we show
that continual pretraining on a small amount of long-context
data, in our case, 1-5B tokens, can “unlock” a 7B model’s
arXiv:2402.10171v1 [cs.CL] 15 Feb 2024 |
2402.03175v1.pdf | 1
THE MATRIX: A BAYESIAN LEARNING MODEL FOR LLMS
Siddhartha Dalal
Department of Statistics
Columbia University
The City of New York
sd2803@columbia.edu
Vishal Misra
Department of Computer Science
Columbia University
The City of New York
vishal.misra@columbia.edu
ABSTRACT
In this paper, we introduce a Bayesian learning model to understand the behavior of Large Language
Models (LLMs). We explore the optimization metric of LLMs, which is based on predicting the next
token, and develop a novel model grounded in this principle. Our approach involves constructing an
ideal generative text model represented by a multinomial transition probability matrix with a prior,
and we examine how LLMs approximate this matrix. We discuss the continuity of the mapping
between embeddings and multinomial distributions, and present the Dirichlet approximation theorem
to approximate any prior. Additionally, we demonstrate how text generation by LLMs aligns with
Bayesian learning principles and delve into the implications for in-context learning, specifically
explaining why in-context learning emerges in larger models where prompts are considered as
samples to be updated. Our findings indicate that the behavior of LLMs is consistent with Bayesian
Learning, offering new insights into their functioning and potential applications.
1 Introduction
The advent of LLMs, starting with GPT3 [ 2], has revolutionized the world of natural language processing, and the
introduction of ChatGPT [ 14] has taken the world by storm. There have been several approaches to try and understand
how these models work, and in particular how “few-shot" or “in context learning" works [ 10,11,9], and it is an ongoing
pursuit. In our work we look at the workings of an LLM from a novel standpoint, and develop a Bayesian model to
explain their behavior. We focus on the optimization metric of next token prediction for these LLMs, and use that to
build an abstract probability matrix which is the cornerstone of our model and analysis. We show in our paper that the
behavior of LLMs is consistent with Bayesian learning and explain many empirical observations of the LLMs using our
model.
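A toy version of the Bayesian picture is easy to state: a single row of the abstract transition matrix is a multinomial over next tokens with a Dirichlet prior, and observed continuations update it in closed form. The vocabulary and counts below are invented purely for illustration.

import numpy as np

vocab = ["cat", "dog", "sat", "ran"]
alpha = np.ones(len(vocab))             # symmetric Dirichlet prior over next tokens for one prefix
counts = np.array([2, 0, 5, 1])         # continuations observed after that prefix

posterior_alpha = alpha + counts        # Dirichlet posterior parameters
predictive = posterior_alpha / posterior_alpha.sum()   # posterior predictive next-token distribution
print(dict(zip(vocab, predictive.round(3))))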
1.1 Paper organization and our contributions
We first describe our approach at a high level, and in the rest of the paper get into the details of the approach. We
focus on the optimization metric of these LLMs, namely, predict the next token, and develop the model from there on.
We first describe the ideal generative text model (Section 2.1), and relate it to its representation of an abstract (and
enormous) multinomial transition probability matrix. We argue that the optimization metric results in these LLMs
learning to represent this probability matrix during training, and text generation is nothing but picking a multinomial
distribution from a specific row of this matrix. This matrix, however is infeasible to be represented by the LLMs, even
with billions of parameters, so the LLMs learn to approximate it. Further, the training data is a subset of the entire text
in the world, so the learnt matrix is an approximation and reflection of the matrix induced by the training data, rather
than the a representation of the ideal matrix. Next (Section 3), we relate the rows of this matrix to the embeddings of the
prompt and prove (Theorem 3.1) a result on the continuity of the mapping between the embeddings and the multinomial
distribution induced by the embedding. We then prove (Theorem 4.1) that any prior over multinomial distribution can
be represented as a finite mixture of Dirichlet distributions. We then argue, and demonstrate (Section 5.2) that text
∗The authors are listed in alphabetical order.
arXiv:2402.03175v1 [cs.LG] 5 Feb 2024 |
2402.04845.pdf | AlphaFold Meets Flow Matching for Generating Protein Ensembles
Bowen Jing1, Bonnie Berger1,2, Tommi Jaakkola1
Abstract
The biological functions of proteins often de-
pend on dynamic structural ensembles. In this
work, we develop a flow-based generative mod-
eling approach for learning and sampling the
conformational landscapes of proteins. We re-
purpose highly accurate single-state predictors
such as AlphaFold and ESMFold and fine-tune
them under a custom flow matching framework
to obtain sequence-conditoned generative mod-
els of protein structure called Alpha FLOW and
ESM FLOW . When trained and evaluated on
the PDB, our method provides a superior com-
bination of precision and diversity compared to
AlphaFold with MSA subsampling. When fur-
ther trained on ensembles from all-atom MD,
our method accurately captures conformational
flexibility, positional distributions, and higher-
order ensemble observables for unseen proteins.
Moreover, our method can diversify a static
PDB structure with faster wall-clock convergence
to certain equilibrium properties than replicate
MD trajectories, demonstrating its potential as a
proxy for expensive physics-based simulations.
Code is available at https://github.com/
bjing2016/alphaflow .
1. Introduction
Proteins adopt complex three-dimensional structures, often
as members of structural ensembles with distinct states, col-
lective motions, and disordered fluctuations, to carry out
their biological functions. For example, conformational
changes are critical in the function of transporters, channels,
and enzymes, and the properties of equilibrium ensembles
help govern the strength and selectivity of molecular interac-
tions (Meller et al., 2023; Vögele et al., 2023). While deep
learning methods such as AlphaFold (Jumper et al., 2021)
have excelled in the single-state modeling of experimental
protein structures, they fail to account for this conforma-
tional heterogeneity (Lane, 2023; Ourmazd et al., 2022).
1CSAIL, Massachusetts Institute of Technology, 2Department
of Mathematics, Massachusetts Institute of Technology. Corre-
spondence to: Bowen Jing <bjing@mit.edu>.
Hence, a method which builds upon the level of accuracy of
single-structure predictors, but reveals underlying structural
ensembles, would be of great value to structural biologists.
Existing machine learning approaches for generating struc-
tural ensembles have focused on inference-time interven-
tions in AlphaFold that modify the multiple sequence
alignment (MSA) input (Del Alamo et al., 2022; Stein &
Mchaourab, 2022; Wayment-Steele et al., 2023), resulting in
a different structure prediction for each version of the MSA.
While these approaches have demonstrated some success,
they suffer from two key limitations. First, by operating on
the MSA, they cannot be generalized to structure predictors
based on protein language models (PLMs) such as ESMFold
(Lin et al., 2023) or OmegaFold (Wu et al., 2022), which
have grown in popularity due to their fast runtime and ease
of use. Secondly, these inference-time interventions do not
provide the capability to train on protein ensembles from
beyond the PDB—for example, ensembles from molecular
dynamics, which are of significant scientific interest but can
be extremely expensive to simulate (Shaw et al., 2010).
To address these limitations, in this work we combine Al-
phaFold and ESMFold with flow matching , a recent genera-
tive modeling framework (Lipman et al., 2022; Albergo &
Vanden-Eijnden, 2022), to propose a principled method for
sampling the conformational landscape of proteins. While
AlphaFold and ESMFold were originally developed and
trained as regression models that predict a single best protein
structure for a given MSA or sequence input, we develop
a strategy for repurposing them as (sequence-conditioned)
generative models of protein structure. This synthesis relies
on the key insight that iterative denoising frameworks (such
as diffusion and flow-matching) provide a general recipe
for converting regression models to generative models with
relatively little modification to the architecture and training
objective. Unlike inference-time MSA ablation, this strat-
egy applies equally well to PLM-based predictors and can
be used to train or fine-tune on arbitrary ensembles.
While flow matching has been well established for images,
its application to protein structures remains nascent (Bose
et al., 2023). Hence, we develop a custom flow matching
framework tailored to the architecture and training practices
of AlphaFold and ESMFold. Our framework leverages the
polymer-structured prior distribution from harmonic diffu-
arXiv:2402.04845v1 [q-bio.BM] 7 Feb 2024 |
1506.00552.pdf | Coordinate Descent Converges Faster with the
Gauss-Southwell Rule Than Random Selection
Julie Nutini1, Mark Schmidt1, Issam H. Laradji1, Michael Friedlander2, Hoyt Koepke3
1University of British Columbia,2University of California, Davis,3Dato
Abstract
There has been significant recent work on the theory and application of randomized coordinate descent
algorithms, beginning with the work of Nesterov [ SIAM J. Optim., 22(2), 2012 ], who showed that a
random-coordinate selection rule achieves the same convergence rate as the Gauss-Southwell selection
rule. This result suggests that we should never use the Gauss-Southwell rule, because it is typically
much more expensive than random selection. However, the empirical behaviours of these algorithms
contradict this theoretical result: in applications where the computational costs of the selection rules
are comparable, the Gauss-Southwell selection rule tends to perform substantially better than random
coordinate selection. We give a simple analysis of the Gauss-Southwell rule showing that—except in
extreme cases—its convergence rate is faster than choosing random coordinates. We also (i) show that
exact coordinate optimization improves the convergence rate for certain sparse problems, (ii) propose a
Gauss-Southwell-Lipschitz rule that gives an even faster convergence rate given knowledge of the Lipschitz
constants of the partial derivatives, (iii) analyze the effect of approximate Gauss-Southwell rules, and
(iv) analyze proximal-gradient variants of the Gauss-Southwell rule.
1 Coordinate Descent Methods
There has been substantial recent interest in applying coordinate descent methods to solve large-scale op-
timization problems, starting with the seminal work of Nesterov [2012], who gave the first global rate-of-
convergence analysis for coordinate-descent methods for minimizing convex functions. This analysis suggests
that choosing a random coordinate to update gives the same performance as choosing the “best” coordi-
nate to update via the more expensive Gauss-Southwell (GS) rule. (Nesterov also proposed a more clever
randomized scheme, which we consider later in this paper.) This result gives a compelling argument to use
randomized coordinate descent in contexts where the GS rule is too expensive. It also suggests that there
is no benefit to using the GS rule in contexts where it is relatively cheap. But in these contexts, the GS
rule often substantially outperforms randomized coordinate selection in practice. This suggests that either
the analysis of GS is not tight, or that there exists a class of functions for which the GS rule is as slow as
randomized coordinate descent.
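The two selection rules under comparison are easy to state side by side. The sketch below runs coordinate descent on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x with either the Gauss-Southwell rule or uniform random selection; problem size, conditioning, and iteration count are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                 # positive definite Hessian
b = rng.standard_normal(n)
L = np.diag(A)                          # coordinate-wise Lipschitz constants

def coordinate_descent(rule, n_iter=2000):
    x = np.zeros(n)
    for _ in range(n_iter):
        g = A @ x - b                   # full gradient (cheap here; GS only needs its largest entry)
        i = int(np.argmax(np.abs(g))) if rule == "gs" else int(rng.integers(n))
        x[i] -= g[i] / L[i]             # step 1/L_i on coordinate i (exact minimization for a quadratic)
    return 0.5 * x @ A @ x - b @ x

print("GS:", coordinate_descent("gs"), " random:", coordinate_descent("random"))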
After discussing contexts in which it makes sense to use coordinate descent and the GS rule, we answer
this theoretical question by giving a tighter analysis of the GS rule (under strong-convexity and standard
smoothness assumptions) that yields the same rate as the randomized method for a restricted class of
functions, but is otherwise faster (and in some cases substantially faster). We further show that, compared
to the usual constant step-size update of the coordinate, the GS method with exact coordinate optimization
has a provably faster rate for problems satisfying a certain sparsity constraint (Section 5). We believe that
this is the first result showing a theoretical benefit of exact coordinate optimization; all previous analyses
show that these strategies obtain the same rate as constant step-size updates, even though exact optimization
tends to be faster in practice. Furthermore, in Section 6, we propose a variant of the GS rule that, similar
to Nesterov’s more clever randomized sampling scheme, uses knowledge of the Lipschitz constants of the
coordinate-wise gradients to obtain a faster rate. We also analyze approximate GS rules (Section 7), which
arXiv:1506.00552v2 [math.OC] 28 Oct 2018 |
10.1016.j.acha.2021.12.009.pdf | Appl. Comput. Harmon. Anal. 59 (2022) 85–116
Applied and Computational Harmonic Analysis
Loss landscapes and optimization in over-parameterized
non-linear systems and neural networks
Chaoyue Liua, Libin Zhub,c, Mikhail Belkinc,∗
aDepartment of Computer Science and Engineering, The Ohio State University, United States of America
bDepartment of Computer Science and Engineering, University of California, San Diego, United States
of America
cHalicioğlu Data Science Institute, University of California, San Diego, United States of America
Article history:
Received 9 June 2021
Received in revised form 24
December 2021
Accepted 26 December 2021
Available online 10 January 2022
Communicated by David Donoho
Keywords:
Deep learning
Non-linear optimization
Over-parameterized models
PL∗ condition
Abstract
The success of deep learning is due, to a large extent, to the remarkable effectiveness
of gradient-based optimization methods applied to large neural networks. The
purpose of this work is to propose a modern view and a general mathematical
framework for loss landscapes and efficient optimization in over-parameterized
machine learning models and systems of non-linear equations, a setting that
includes over-parameterized deep neural networks. Our starting observation is that
optimization landscapes corresponding to such systems are generally not convex,
even locally around a global minimum, a condition we call essential non-convexity .
We argue that instead they satisfy PL∗, a variant of the Polyak-Łojasiewicz
condition [32,25] on most (but not all) of the parameter space, which guarantees
both the existence of solutions and efficient optimization by (stochastic) gradient
descent (SGD/GD). The PL∗ condition of these systems is closely related to the
condition number of the tangent kernel associated to a non-linear system showing
how a PL∗-based non-linear theory parallels classical analyses of over-parameterized
linear equations. We show that wide neural networks satisfy the PL∗ condition,
which explains the (S)GD convergence to a global minimum. Finally we propose a
relaxation of the PL∗ condition applicable to “almost” over-parameterized systems.
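The inequality named in the keywords can be checked numerically along a gradient-descent path. The sketch below does so for an over-parameterized linear least-squares problem, monitoring the ratio ||grad L(w)||^2 / (2 L(w)); the problem sizes and step count are illustrative, and the constant observed is empirical rather than the paper's.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 20, 100                    # over-parameterized: zero-loss solutions exist
X = rng.standard_normal((n_samples, n_params))
y = rng.standard_normal(n_samples)
w = np.zeros(n_params)

def loss_and_grad(w):
    r = X @ w - y
    return 0.5 * (r ** 2).sum(), X.T @ r

eta = 1.0 / np.linalg.norm(X, 2) ** 2            # step size 1 / L_smooth
for step in range(200):
    L, g = loss_and_grad(w)
    pl_ratio = 0.5 * (g @ g) / L                 # stays bounded away from zero along the path
    w -= eta * g
print("final loss:", loss_and_grad(w)[0], " last PL ratio:", pl_ratio)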
© 2021 Elsevier Inc. All rights reserved.
1. Introduction
A singular feature of modern machine learning is a large number of trainable model parameters. Just in
the last few years we have seen state-of-the-art models grow from tens or hundreds of millions of parameters
to much larger systems with hundreds of billions [6] or even trillions of parameters [14]. Invariably these models
are trained by gradient-descent-based methods, such as Stochastic Gradient Descent (SGD) or Adam [19].
Why are these local gradient methods so effective in optimizing complex highly non-convex systems? In the
past few years an emerging understanding of gradient-based methods have started to focus on the insight
*Corresponding author.
E-mail address: mbelkin@ucsd.edu (M. Belkin).
https://doi.org/10.1016/j.acha.2021.12.009
1063-5203/© 2021 Elsevier Inc. All rights reserved. |
2309.02390.pdf | 5 September 2023
Explaining grokking through circuit efficiency
Vikrant Varma*, 1, Rohin Shah*, 1, Zachary Kenton1, János Kramár1 and Ramana Kumar1
*Equal contributions,1Google DeepMind
One of the most surprising puzzles in neural network generalisation is grokking : a network with perfect
training accuracy but poor generalisation will, upon further training, transition to perfect generalisation.
We propose that grokking occurs when the task admits a generalising solution and a memorising solution,
where the generalising solution is slower to learn but more efficient, producing larger logits with the
same parameter norm. We hypothesise that memorising circuits become more inefficient with larger
training datasets while generalising circuits do not, suggesting there is a critical dataset size at which
memorisation and generalisation are equally efficient. We make and confirm four novel predictions about
grokking, providing significant evidence in favour of our explanation. Most strikingly, we demonstrate
two novel and surprising behaviours: ungrokking , in which a network regresses from perfect to low test
accuracy, and semi-grokking , in which a network shows delayed generalisation to partial rather than
perfect test accuracy.
1. Introduction
When training a neural network, we expect that once training loss converges to a low value, the
network will no longer change much. Power et al. (2021) discovered a phenomenon dubbed grokking
that drastically violates this expectation. The network first “memorises” the data, achieving low
and stable training loss with poor generalisation, but with further training transitions to perfect
generalisation. We are left with the question: why does the network’s test performance improve
dramatically upon continued training, having already achieved nearly perfect training performance?
Recent answers to this question vary widely, including the difficulty of representation learning (Liu
et al., 2022), the scale of parameters at initialisation (Liu et al., 2023), spikes in loss ("slingshots") (Thi-
lak et al., 2022), random walks among optimal solutions (Millidge, 2022), and the simplicity of
the generalising solution (Nanda et al., 2023, Appendix E). In this paper, we argue that the last
explanation is correct, by stating a specific theory in this genre, deriving novel predictions from the
theory, and confirming the predictions empirically.
We analyse the interplay between the internal mechanisms that the neural network uses to
calculate the outputs, which we loosely call “circuits” (Olah et al., 2020). We hypothesise that there
are two families of circuits that both achieve good training performance: one which generalises well
(𝐶gen) and one which memorises the training dataset ( 𝐶mem). The key insight is that when there
are multiple circuits that achieve strong training performance, weight decay prefers circuits with high
“efficiency” , that is, circuits that require less parameter norm to produce a given logit value.
Efficiency answers our question above: if 𝐶gen is more efficient than 𝐶mem, gradient descent can
reduce nearly perfect training loss even further by strengthening 𝐶gen while weakening 𝐶mem, which
then leads to a transition in test performance. With this understanding, we demonstrate in Section 3
that three key properties are sufficient for grokking: (1) 𝐶gen generalises well while 𝐶mem does not,
(2) 𝐶gen is more efficient than 𝐶mem, and (3) 𝐶gen is learned more slowly than 𝐶mem.
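For readers who have not seen the phenomenon, the standard testbed is small enough to sketch: a network trained with weight decay on modular addition, where training accuracy saturates long before test accuracy. The model size, data fraction, and optimizer settings below are illustrative defaults rather than this paper's configuration, and whether and when the delayed generalisation appears depends on exactly these choices and on the training budget.

import torch

P = 97                                             # modulus for (a + b) mod P
pairs = [(a, b) for a in range(P) for b in range(P)]
perm = torch.randperm(len(pairs)).tolist()
split = int(0.3 * len(pairs))                      # 30% of all pairs used for training

def encode(subset):
    x = torch.zeros(len(subset), 2 * P)
    y = torch.empty(len(subset), dtype=torch.long)
    for i, (a, b) in enumerate(subset):
        x[i, a] = 1.0; x[i, P + b] = 1.0           # one-hot encode both operands
        y[i] = (a + b) % P
    return x, y

train = encode([pairs[i] for i in perm[:split]])
test = encode([pairs[i] for i in perm[split:]])
model = torch.nn.Sequential(torch.nn.Linear(2 * P, 256), torch.nn.ReLU(), torch.nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)   # weight decay is the key ingredient

for step in range(20000):
    loss = torch.nn.functional.cross_entropy(model(train[0]), train[1])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr = (model(train[0]).argmax(-1) == train[1]).float().mean().item()
            te = (model(test[0]).argmax(-1) == test[1]).float().mean().item()
        print(step, tr, te)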
Since 𝐶gen generalises well, it automatically works for any new data points that are added to
the training dataset, and so its efficiency should be independent of the size of the training dataset.
In contrast, 𝐶mem must memorise any additional data points added to the training dataset, and so
Corresponding author(s): vikrantvarma@deepmind.com, rohinmshah@deepmind.com
arXiv:2309.02390v1 [cs.LG] 5 Sep 2023 |
10.1016.j.cell.2023.12.035.pdf | Article
Brain-wide neural activity underlying memory-
guided movement
Graphical abstract
Highlights
- Anatomy-guided activity recordings in multi-regional neural circuits during behavior
- Movement encoding is strongest in the medulla, followed by the midbrain and cortex
- Choice coding arises in a specific multi-regional circuit distributed across the brain
- Coding of choice and action exhibit strong correlations across brain areas
Authors
Susu Chen, Yi Liu, Ziyue Aiden Wang, ..., Shaul Druckmann, Nuo Li, Karel Svoboda
Correspondence
shauld@stanford.edu (S.D.),
nuo.li@bcm.edu (N.L.), karel.svoboda@alleninstitute.org (K.S.)
In brief
A sparse neural network, distributed
across major brain compartments, produces tightly orchestrated activity patterns underlying decision-making and movement initiation.
[Graphical abstract panels: anatomy-guided multi-regional simultaneous recordings; mesoscale activity map data; movement encoding strongest in medulla > midbrain > cortex; choice coding concentrated in ALM projection zones (striatum, thalamus, midbrain); choice-related activity correlated across brain areas]
Chen et al., 2024, Cell 187, 676–691
February 1, 2024 © 2024 The Authors. Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.035
|
2309.14525.pdf | Preprint
ALIGNING LARGE MULTIMODAL MODELS
WITH FACTUALLY AUGMENTED RLHF
Zhiqing Sun∗♠, Sheng Shen∗♣, Shengcao Cao∗♢
Haotian Liu♡, Chunyuan Li♮, Yikang Shen△, Chuang Gan†∇△, Liang-Yan Gui†♢
Yu-Xiong Wang†♢, Yiming Yang†♠, Kurt Keutzer†♣, Trevor Darrell†♣
♣UC Berkeley,♠CMU,♢UIUC,♡UW–Madison,∇UMass Amherst
♮Microsoft Research,△MIT-IBM Watson AI Lab
ABSTRACT
Large Multimodal Models (LMM) are built across modalities and the misalign-
ment between two modalities can result in “hallucination”, generating textual out-
puts that are not grounded by the multimodal information in context. To address
the multimodal misalignment issue, we adapt the Reinforcement Learning from
Human Feedback (RLHF) from the text domain to the task of vision-language
alignment, where human annotators are asked to compare two responses and pin-
point the more hallucinated one, and the vision-language model is trained to max-
imize the simulated human rewards. We propose a new alignment algorithm
called Factually Augmented RLHF that augments the reward model with addi-
tional factual information such as image captions and ground-truth multi-choice
options, which alleviates the reward hacking phenomenon in RLHF and further
improves the performance. We also enhance the GPT-4-generated training data
(for vision instruction tuning) with previously available human-written image-
text pairs to improve the general capabilities of our model. To evaluate the pro-
posed approach in real-world scenarios, we develop a new evaluation benchmark
MMHal-Bench with a special focus on penalizing hallucinations. As the first
LMM trained with RLHF, our approach achieves remarkable improvement on the
LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4
(while previous best methods can only achieve the 87% level), and an improve-
ment by 60% on MMHal-Bench over other baselines. We open-source our code,
model, and data at https://llava-rlhf.github.io.
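A sketch of the reward-model side of this recipe is given below, with the factual augmentation rendered simply as ground-truth text (for example an image caption) appended to the reward model's input. The input format, scoring function, and toy reward model are assumptions made for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F

def reward_pair_loss(reward_model, prompt, caption, preferred, rejected):
    # Condition the reward model on the prompt plus factual context.
    context = prompt + "\n[Facts] " + caption + "\n"
    r_good = reward_model(context + preferred)
    r_bad = reward_model(context + rejected)
    # Standard pairwise (Bradley-Terry) comparison loss over the two responses.
    return -F.logsigmoid(r_good - r_bad)

toy_reward = lambda text: torch.tensor(float(len(text) % 7))   # stand-in for a learned scalar reward
print(reward_pair_loss(toy_reward, "Describe the image.", "A dog on a beach.",
                       "A dog runs on the sand.", "A cat sits on a sofa."))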
1 INTRODUCTION
Large Language Models (LLMs; Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023)) can
delve into the multimodal realm either by further pre-training with image-text pairs (Alayrac et al.;
Awadalla et al., 2023) or by fine-tuning them with specialized vision instruction tuning datasets (Liu
et al., 2023a; Zhu et al., 2023), leading to the emergence of powerful Large Multimodal Models
(LMMs). Yet, developing LMMs faces challenges, notably the gap between the volume and quality
of multimodal data versus text-only datasets. Consider the LLaVA model (Liu et al., 2023a), which is
initialized from a pre-trained vision encoder (Radford et al., 2021) and an instruction-tuned language
model (Chiang et al., 2023). It is trained on just 150K synthetic image-based dialogues, which is
much less in comparison to text-only models (e.g., Flan (Longpre et al., 2023), utilizing over 100M
examples spanning 1800 tasks). Such limitations in data can lead to misalignment between the vision
and language modalities. Consequently, LMMs may produce hallucinated outputs, which are not
accurately anchored to the context provided by images.
To mitigate the challenges posed by the scarcity of high-quality visual instruction tuning data for
LMM training, we introduce LLaVA-RLHF , a vision-language model trained for improved mul-
timodal alignment. One of our key contributions is the adaptation of the Reinforcement Learning
from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), a
general and scalable alignment paradigm that shows great success for text-based AI agents, to the
∗Equal contribution. Ordering is determined by dice rolling. †Equal advising.
arXiv:2309.14525v1 [cs.CV] 25 Sep 2023 |
2306.12672.pdf | From Word Models to World Models:
Translating from Natural Language to the
Probabilistic Language of Thought
Lionel Wong1⋆, Gabriel Grand1⋆, Alexander K. Lew1, Noah D. Goodman2, Vikash K.
Mansinghka1, Jacob Andreas1, Joshua B. Tenenbaum1
⋆Equal contribution.
1MIT,2Stanford
Abstract
How does language inform our downstream thinking? In particular, how do humans make meaning from
language—and how can we leverage a theory of linguistic meaning to build machines that think in more
human-like ways? In this paper, we propose rational meaning construction , a computational framework
for language-informed thinking that combines neural models of language with probabilistic models for
rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language
into a probabilistic language of thought (PLoT)—a general-purpose symbolic substrate for probabilistic,
generative world modeling. Our architecture integrates two powerful computational tools that have not
previously come together: we model thinking with probabilistic programs , an expressive representation for
flexible commonsense reasoning; and we model meaning construction with large language models (LLMs),
which support broad-coverage translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework in action through examples covering
four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual
and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can
generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We
extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics
engines, and goal-directed planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of world models themselves.
We hope this work will help to situate contemporary developments in LLMs within a broader cognitive
picture of human language and intelligence, providing a roadmap towards AI systems that synthesize the
insights of both modern and classical computational perspectives.
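A toy instance of the proposed pipeline: an utterance such as "the coin seems biased towards heads; we saw 8 heads in 10 flips" is translated into a small generative program with a prior over the coin's weight, and language-informed inference amounts to conditioning on the observation, here by naive rejection sampling. The program and prior are inventions for illustration, not output from the system described in the paper.

import numpy as np

rng = np.random.default_rng(0)

def generative_program():
    weight = rng.beta(2, 2)                 # prior belief about the coin's bias
    flips = rng.random(10) < weight         # simulate ten flips
    return weight, flips.sum()

# Condition on the observed data (8 heads out of 10) by rejection sampling.
posterior = [w for w, heads in (generative_program() for _ in range(200_000)) if heads == 8]
print(len(posterior), np.mean(posterior))   # posterior mean of the weight, roughly 0.71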
1 Introduction
Language expresses the vast internal landscape of our thoughts. We use language to convey what we believe,
what we are uncertain about, and what we do not know. We talk about what we see in the world around
us, and what we imagine in real or wholly hypothetical futures. We discuss what we want and what we
plan to do, and dissect what others want and what we think they will do. We build and pass on new bodies
of knowledge in language—we ask questions and offer explanations, give commands and instructions, and
propose and refute theories. Some of these ideas can be expressed in part through other means. But language
stands apart for its flexibility and breadth, and its seeming proximity to our thoughts.
What is language? How does language get its meaning, and when should we say that a person or machine
knows, understands, and can use it? What is the relationship between language and the rest of general
cognition—what allows language to inform and support so much of thought? This paper focuses on these
questions as they relate to human language and thought, in computational terms. What integrated cognitive
theory can model how language relates to the other core systems of human cognition? If we seek to build AI
systems that emulate how humans talk and think, what architecture can integrate language robustly into
systems that support the full scope of our thought?
Code for the examples in this paper is available at: github.com/gabegrand/world-models .
Correspondence: co-primary authors (zyzzyva@mit.edu, gg@mit.edu); co-supervisors (jda@mit.edu, jbt@mit.edu). arXiv:2306.12672v2 [cs.CL] 23 Jun 2023 |
2210.17323.pdf | Published as a conference paper at ICLR 2023
GPTQ: ACCURATE POST-TRAINING QUANTIZATION
FOR GENERATIVE PRE-TRAINED TRANSFORMERS
Elias Frantar∗ (IST Austria), Saleh Ashkboos (ETH Zurich), Torsten Hoefler (ETH Zurich), Dan Alistarh (IST Austria & NeuralMagic)
ABSTRACT
Generative Pre-trained Transformer models, known as GPT or OPT, set them-
selves apart through breakthrough performance across complex language mod-
elling tasks, but also by their extremely high computational and storage costs.
Specifically, due to their massive size, even inference for large, highly-accurate
GPT models may require multiple performant GPUs, which limits the usability
of such models. While there is emerging work on relieving this pressure via
model compression, the applicability and performance of existing compression
techniques is limited by the scale and complexity of GPT models. In this paper,
we address this challenge, and propose GPTQ, a new one-shot weight quantiza-
tion method based on approximate second-order information, that is both highly-
accurate and highly-efficient. Specifically, GPTQ can quantize GPT models with
175 billion parameters in approximately four GPU hours, reducing the bitwidth
down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the
uncompressed baseline. Our method more than doubles the compression gains rel-
ative to previously-proposed one-shot quantization methods, preserving accuracy,
allowing us for the first time to execute a 175 billion-parameter model inside a
single GPU for generative inference. Moreover, we also show that our method
can still provide reasonable accuracy in the extreme quantization regime, in which
weights are quantized to 2-bit or even ternary quantization levels. We show ex-
perimentally that these improvements can be leveraged for end-to-end inference
speedups over FP16, of around 3.25x when using high-end GPUs (NVIDIA A100)
and 4.5x when using more cost-effective ones (NVIDIA A6000). The implemen-
tation is available at https://github.com/IST-DASLab/gptq .
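For orientation, the sketch below implements plain round-to-nearest 4-bit weight quantization with one scale and zero-point per output row, the simple one-shot baseline; GPTQ itself goes further by quantizing weights column by column and using approximate second-order (Hessian) information to redistribute the induced error, which is not shown here. The function name and the per-row grouping are illustrative assumptions, not the released implementation.

    import numpy as np

    def quantize_rtn(W, bits=4):
        # Round-to-nearest asymmetric quantization, one (scale, zero-point) per row.
        qmax = 2 ** bits - 1
        w_min = W.min(axis=1, keepdims=True)
        w_max = W.max(axis=1, keepdims=True)
        scale = np.maximum((w_max - w_min) / qmax, 1e-8)
        zero = np.round(-w_min / scale)
        Q = np.clip(np.round(W / scale) + zero, 0, qmax)  # integer codes in [0, 2^bits - 1]
        W_hat = (Q - zero) * scale                        # dequantized weights
        return Q.astype(np.uint8), scale, zero, W_hat

    # Example: quantize a random weight matrix and measure reconstruction error.
    W = np.random.randn(8, 16).astype(np.float32)
    Q, scale, zero, W_hat = quantize_rtn(W, bits=4)
    print("mean squared quantization error:", float(np.mean((W - W_hat) ** 2)))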
1 INTRODUCTION
Pre-trained generative models from the Transformer (Vaswani et al., 2017) family, commonly known
as GPT or OPT (Radford et al., 2019; Brown et al., 2020; Zhang et al., 2022), have shown break-
through performance for complex language modelling tasks, leading to massive academic and prac-
tical interest. One major obstacle to their usability is computational and storage cost, which ranks
among the highest for known models. For instance, the best-performing model variants, e.g. GPT3-
175B, have in the order of 175 billion parameters and require tens-to-hundreds of GPU years to
train (Zhang et al., 2022). Even the simpler task of inferencing over a pre-trained model, which is
our focus in this paper, is highly challenging: for instance, the parameters of GPT3-175B occupy
326GB (counting in multiples of 1024) of memory when stored in a compact float16 format. This
exceeds the capacity of even the highest-end single GPUs, and thus inference must be performed
using more complex and expensive setups, such as multi-GPU deployments.
Although a standard approach to eliminating these overheads is model compression , e.g. (Hoefler
et al., 2021; Gholami et al., 2021), surprisingly little is known about compressing such models for
inference. One reason is that more complex methods for low-bitwidth quantization or model prun-
ing usually require model retraining , which is extremely expensive for billion-parameter models.
Alternatively, post-training methods (Nagel et al., 2020; Wang et al., 2020; Hubara et al., 2020;
Nahshan et al., 2021), which compress the model in one shot, without retraining, would be very
appealing. Unfortunately, the more accurate variants of such methods (Li et al., 2021; Hubara et al.,
2021; Frantar et al., 2022) are complex and challenging to scale to billions of parameters (Yao et al.,
∗Corresponding author: elias.frantar@ist.ac.at
arXiv:2210.17323v2 [cs.LG] 22 Mar 2023 |
10.1016.j.cell.2024.01.026.pdf | Article
Cryo-EM structures of the plant plastid-encoded
RNA polymerase
Graphical abstract
Highlights
- Plant chloroplast RNA polymerase comprises a catalytic core and four peripheral modules
- The scaffold module stabilizes the catalytic core and bridges other modules
- The protection module has SOD activity, and the RNA module recognizes RNA sequence
- The regulation module likely controls transcription activity of the catalytic core
Authors
Xiao-Xian Wu, Wen-Hui Mu, Fan Li, ..., Chanhong Kim, Fei Zhou, Yu Zhang
Correspondence
zhoufei@mail.hzau.edu.cn (F.Z.), yzhang@cemps.ac.cn (Y.Z.)
In brief
The cryo-EM structures of Nicotiana tabacum (tobacco) chloroplast RNA polymerase apoenzyme and transcription elongation complexes reveal the composition, assembly, function, and evolution of the chloroplast transcription apparatus.
Wu et al., 2024, Cell 187, 1127–1144
February 29, 2024 © 2024 Elsevier Inc.
https://doi.org/10.1016/j.cell.2024.01.026
|
10.1038.s41467-021-26529-9.pdf | ARTICLE
The generative capacity of probabilistic protein
sequence models
Francisco McGee1,2,3, Sandro Hauri4,5, Quentin Novinger2,5, Slobodan Vucetic4,5, Ronald M. Levy1,3,6,7,
Vincenzo Carnevale2,3✉& Allan Haldane1,7✉
Potts models and variational autoencoders (VAEs) have recently gained popularity as gen-
erative protein sequence models (GPSMs) to explore fitness landscapes and predict mutation
effects. Despite encouraging results, current model evaluation metrics leave unclear whether GPSMs faithfully reproduce the complex multi-residue mutational patterns observed in natural sequences due to epistasis. Here, we develop a set of sequence statistics to assess the “generative capacity” of three current GPSMs: the pairwise Potts Hamiltonian, the VAE, and the site-independent model. We show that the Potts model’s generative capacity is largest, as the higher-order mutational statistics generated by the model agree with those observed for natural sequences, while the VAE’s lies between the Potts and site-independent models. Importantly, our work provides a new framework for evaluating and interpreting GPSM accuracy which emphasizes the role of higher-order covariation and epistasis, with broader implications for probabilistic sequence models in general.
https://doi.org/10.1038/s41467-021-26529-9
1Center for Biophysics and Computational Biology, Temple University, Philadelphia 19122, USA.2Institute for Computational Molecular Science, Temple
University, Philadelphia 19122, USA.3Department of Biology, Temple University, Philadelphia 19122, USA.4Center for Hybrid Intelligence, Temple University,
Philadelphia 19122, USA.5Department of Computer & Information Sciences, Temple University, Philadelphia 19122, USA.6Department of Physics, Temple
University, Philadelphia 19122, USA.7Department of Chemistry, Temple University, Philadelphia 19122, USA.✉email: vincenzo.carnevale@temple.edu ;
allan.haldane@temple.edu
|
2205.11916.pdf | Large Language Models are Zero-Shot Reasoners
Takeshi Kojima (The University of Tokyo, t.kojima@weblab.t.u-tokyo.ac.jp), Shixiang Shane Gu (Google Research, Brain Team), Machel Reid (Google Research∗), Yutaka Matsuo (The University of Tokyo), Yusuke Iwasawa (The University of Tokyo)
Abstract
Pretrained large language models (LLMs) are widely used in many sub-fields of
natural language processing (NLP) and generally known as excellent few-shot
learners with task-specific exemplars. Notably, chain of thought (CoT) prompting,
a recent technique for eliciting complex multi-step reasoning through step-by-
step answer examples, achieved the state-of-the-art performances in arithmetics
and symbolic reasoning, difficult system-2 tasks that do not follow the standard
scaling laws for LLMs. While these successes are often attributed to LLMs’
ability for few-shot learning, we show that LLMs are decent zero-shot reasoners
by simply adding “Let’s think step by step” before each answer. Experimental
results demonstrate that our Zero-shot-CoT, using the same single prompt template,
significantly outperforms zero-shot LLM performances on diverse benchmark
reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SV AMP),
symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date
Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot
examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and
GSM8K from 10.4% to 40.7% with large-scale InstructGPT model (text-davinci-
002), as well as similar magnitudes of improvements with another off-the-shelf
large model, 540B parameter PaLM. The versatility of this single prompt across
very diverse reasoning tasks hints at untapped and understudied fundamental
zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive
capabilities may be extracted by simple prompting. We hope our work not only
serves as the minimal strongest zero-shot baseline for the challenging reasoning
benchmarks, but also highlights the importance of carefully exploring and analyzing
the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning
datasets or few-shot exemplars.
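As a minimal sketch of the idea (not the paper's exact templates), Zero-shot-CoT amounts to two plain prompting steps around any text-completion call: first elicit the reasoning with "Let's think step by step", then extract the final answer conditioned on that reasoning. The `complete` callable and the answer-extraction phrasing below are placeholders.

    def zero_shot_cot(question, complete):
        # Stage 1: reasoning extraction -- elicit step-by-step reasoning.
        reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
        reasoning = complete(reasoning_prompt)

        # Stage 2: answer extraction -- ask for the final answer given the reasoning.
        answer_prompt = reasoning_prompt + " " + reasoning + "\nTherefore, the answer is"
        return complete(answer_prompt).strip()

    # Usage with a stubbed model call standing in for a real LLM API:
    fake_llm = lambda prompt: " ... model completion ..."
    print(zero_shot_cot("I have 3 apples and buy 5 more. How many apples do I have?", fake_llm))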
1 Introduction
Scaling up the size of language models has been a key ingredient of recent revolutions in natural
language processing (NLP) [Vaswani et al., 2017, Devlin et al., 2019, Raffel et al., 2020, Brown et al.,
2020, Thoppilan et al., 2022, Rae et al., 2021, Chowdhery et al., 2022]. The success of large language
models (LLMs) is often attributed to (in-context) few-shot or zero-shot learning. It can solve various
tasks by simply conditioning the models on a few examples (few-shot) or instructions describing the
task (zero-shot). The method of conditioning the language model is called “prompting” [Liu et al.,
2021b], and designing prompts either manually [Schick and Schütze, 2021, Reynolds and McDonell,
2021] or automatically [Gao et al., 2021, Shin et al., 2020] has become a hot topic in NLP.
∗Work done while at The University of Tokyo.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2205.11916v4 [cs.CL] 29 Jan 2023 |
2308.06259v3.pdf | Published as a conference paper at ICLR 2024
SELF-ALIGNMENT WITH INSTRUCTION BACKTRANSLATION
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer
Jason Weston & Mike Lewis
Meta
{xianl,jase,mikelewis}@meta.com
ABSTRACT
We present a scalable method to build a high quality instruction following language
model by automatically labelling human-written text with corresponding instruc-
tions. Our approach, named instruction backtranslation , starts with a language
model finetuned on a small amount of seed data, and a given web corpus. The seed
model is used to construct training examples by generating instruction prompts
for web documents ( self-augmentation ), and then selecting high quality examples
from among these candidates ( self-curation ). This data is then used to finetune
a stronger model. Finetuning LLaMa on two iterations of our approach yields a
model that outperforms all other LLaMa-based models on the Alpaca leaderboard
not relying on distillation data, demonstrating highly effective self-alignment.
1 INTRODUCTION
Aligning large language models (LLMs) to perform instruction following typically requires finetuning
on large amounts of human-annotated instructions or preferences (Ouyang et al., 2022; Touvron
et al., 2023a; Bai et al., 2022a) or distilling outputs from more powerful models (Wang et al., 2022a;
Honovich et al., 2022; Taori et al., 2023; Chiang et al., 2023; Peng et al., 2023; Xu et al., 2023).
Recent work highlights the importance of human-annotation data quality (Zhou et al., 2023; Köpf
et al., 2023). However, annotating instruction following datasets with such quality is hard to scale.
In this work, we instead leverage large amounts of unlabelled data to create a high quality instruction
tuning dataset by developing an iterative self-training algorithm. The method uses the model itself
to both augment and curate high quality training examples to improve its own performance. Our
approach, named instruction backtranslation , is inspired by the classic backtranslation method from
machine translation, in which human-written target sentences are automatically annotated with
model-generated source sentences in another language (Sennrich et al., 2015).
Our method starts with a seed instruction following model and a web corpus. The model is first used
toself-augment its training set: for each web document, it creates an instruction following training
example by predicting a prompt (instruction) that would be correctly answered by (a portion of) that
document. Directly training on such data (similarly to Köksal et al. (2023)) gives poor results in our
experiments, both because of the mixed quality of human written web text, and noise in the generated
instructions. To remedy this, we show that the same seed model can be used to self-curate the set of
newly created augmentation data by predicting their quality, and can then be self-trained on only the
highest quality (instruction, output) pairs. The procedure is then iterated, using the improved model
to better curate the instruction data, and re-training to produce a better model.
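The loop described above can be summarized in a few lines; the sketch below is an editorial paraphrase in which `finetune`, `generate_instruction`, and `score_quality` are placeholder hooks around the underlying LLM, and the quality threshold and iteration count are illustrative rather than the paper's settings.

    def instruction_backtranslation(finetune, generate_instruction, score_quality,
                                    seed_data, web_corpus, iterations=2, threshold=4.5):
        # Start from a seed model finetuned on a small amount of seed data.
        model = finetune(seed_data)
        for _ in range(iterations):
            # Self-augmentation: predict an instruction for each unlabelled web document.
            candidates = [(generate_instruction(model, doc), doc) for doc in web_corpus]
            # Self-curation: keep only pairs the model itself rates as high quality.
            curated = [(inst, out) for inst, out in candidates
                       if score_quality(model, inst, out) >= threshold]
            # Re-train on seed data plus the curated (instruction, output) pairs.
            model = finetune(seed_data + curated)
        return model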
Our resulting model, Humpback , outperforms all other existing non-distilled models on the Alpaca
leaderboard (Li et al., 2023). Overall, instruction backtranslation is a scalable method for enabling
language models to improve their own ability to follow instructions.
2 METHOD
Our self-training approach assumes access to a base language model, a small amount of seed data,
and a collection of unlabelled examples, e.g. a web corpus. The unlabelled data is a large, diverse set
arXiv:2308.06259v3 [cs.CL] 12 Mar 2024 |
2209.12892.pdf | LEARNING TO LEARN WITH GENERATIVE MODELS OF
NEURAL NETWORK CHECKPOINTS
William Peebles∗Ilija Radosavovic∗Tim Brooks Alexei A. Efros Jitendra Malik
University of California, Berkeley
ABSTRACT
We explore a data-driven approach for learning to optimize neural networks. We
construct a dataset of neural network checkpoints and train a generative model on
the parameters. In particular, our model is a conditional diffusion transformer that,
given an initial input parameter vector and a prompted loss, error, or return, predicts
the distribution over parameter updates that achieve the desired metric. At test
time, it can optimize neural networks with unseen parameters for downstream tasks
in just one update. We find that our approach successfully generates parameters
for a wide range of loss prompts. Moreover, it can sample multimodal parameter
solutions and has favorable scaling properties. We apply our method to different
neural network architectures and tasks in supervised and reinforcement learning.
1 INTRODUCTION
Gradient-based optimization is the fuel of modern deep learning. Techniques of this class, such
as SGD (Robbins & Monro, 1951) and Adam (Kingma & Ba, 2015), are easy to implement, scale
reasonably well and converge to surprisingly good solutions—even in high-dimensional, non-convex
neural network loss landscapes. Over the past decade, they have enabled impressive results in
computer vision (Krizhevsky et al., 2012; Girshick et al., 2014), natural language processing (Vaswani
et al., 2017; Radford et al., 2018) and audio generation (Van Den Oord et al., 2016).
While these manual optimization techniques have led to large advances, they suffer from an important
limitation: they are unable to improve from past experience. For example, SGD will not converge
any faster when used to optimize the same neural network architecture from the same initialization
the 100th time versus the first time. Learned optimizers capable of leveraging their past experiences
have the potential to overcome this limitation and may accelerate future progress in deep learning.
Of course, the concept of learning improved optimizers is not new and dates back to the 1980s, if not
earlier, following early work from Schmidhuber (1987) and Bengio et al. (1991). In recent years, sig-
nificant effort has been spent on designing algorithms that learn via nested meta-optimization, where
the inner loop optimizes the task-level objective and the outer loop learns the optimizer (Andrychow-
icz et al., 2016; Li & Malik, 2016; Finn et al., 2017). In some instances, these approaches outperform
manual optimizers. However, they are challenging to train in practice due to a reliance on unrolled
optimization and reinforcement learning.
Taking a modern deep learning perspective suggests a simple, scalable and data-driven approach to
this problem. Over the past decade, our community has trained a massive number of checkpoints.
These checkpoints contain a wealth of information: diverse parameter configurations and rich metrics
such as test losses, classification errors and RL returns that describe the quality of the checkpoint.
Instead of leveraging large-scale datasets of images or text, we propose learning from large-scale
datasets of checkpoints recorded over the course of many training runs.
To this end, we create a dataset of neural network checkpoints (Figure 1, left). Our dataset consists of
23 million checkpoints from over a hundred thousand training runs. We collect data from supervised
learning tasks (MNIST, CIFAR-10) as well as reinforcement learning tasks (Cartpole), and across
different neural network architectures (MLPs, CNNs). In addition to parameters, we record relevant
task-level metrics in each checkpoint, such as test losses and classification errors.
*Equal contribution. Code, data and pre-trained models are available on our project page.
arXiv:2209.12892v1 [cs.LG] 26 Sep 2022 |
2023.findings-acl.426.pdf | Findings of the Association for Computational Linguistics: ACL 2023 , pages 6810–6828
July 9-14, 2023 ©2023 Association for Computational Linguistics
“Low-Resource” Text Classification: A Parameter-Free Classification
Method with Compressors
Zhiying Jiang1,2, Matthew Y.R. Yang1, Mikhail Tsirlin1,
Raphael Tang1, Yiqin Dai2 and Jimmy Lin1
1University of Waterloo2AFAIK
{zhiying.jiang, m259yang, mtsirlin, r33tang}@uwaterloo.ca
quinn@afaik.io jimmylin@uwaterloo.ca
Abstract
Deep neural networks (DNNs) are often used
for text classification due to their high accu-
racy. However, DNNs can be computationally
intensive, requiring millions of parameters and
large amounts of labeled data, which can make
them expensive to use, to optimize, and to trans-
fer to out-of-distribution (OOD) cases in prac-
tice. In this paper, we propose a non-parametric
alternative to DNNs that’s easy, lightweight,
and universal in text classification: a combi-
nation of a simple compressor like gzip with
ak-nearest-neighbor classifier. Without any
training parameters, our method achieves re-
sults that are competitive with non-pretrained
deep learning methods on six in-distribution
datasets. It even outperforms BERT on all five
OOD datasets, including four low-resource lan-
guages. Our method also excels in the few-shot
setting, where labeled data are too scarce to
train DNNs effectively. Code is available at
https://github.com/bazingagin/npc_gzip.
1 Introduction
Text classification, as one of the most fundamen-
tal tasks in natural language processing (NLP),
has improved substantially with the help of neu-
ral networks (Li et al., 2022). However, most neu-
ral networks are data-hungry, the degree of which
increases with the number of parameters. Hyper-
parameters must be carefully tuned for different
datasets, and the preprocessing of text data (e.g.,
tokenization, stop word removal) needs to be tai-
lored to the specific model and dataset. Despite
their ability to capture latent correlations and rec-
ognize implicit patterns (LeCun et al., 2015), com-
plex deep neural networks may be overkill for sim-
ple tasks such as topic classification, and lighter
alternatives are usually good enough. For exam-
ple, Adhikari et al. (2019b) find that a simple long
short-term memory network (LSTM; Hochreiter
and Schmidhuber, 1997) with appropriate regular-
ization can achieve competitive results. Shen et al.(2018) further show that even word-embedding-
based methods can achieve results comparable to
convolutional neural networks (CNNs) and recur-
rent neural networks (RNNs).
Among all the endeavors for a lighter alternative
to DNNs, one stream of work focuses on using com-
pressors for text classification. There have been
several studies in this field (Teahan and Harper,
2003; Frank et al., 2000), most of them based on
the intuition that the minimum cross entropy be-
tween a document and a language model of a class
built by a compressor indicates the class of the
document. However, previous works fall short of
matching the quality of neural networks.
Addressing these shortcomings, we propose a
text classification method combining a lossless
compressor, a compressor-based distance metric
with a k-nearest-neighbor classifier ( kNN). It uti-
lizes compressors in capturing regularity, which
is then translated into similarity scores by a
compressor-based distance metric. With the re-
sulting distance matrix, we use kNN to perform
classification. We carry out experiments on seven
in-distribution datasets and five out-of-distribution
ones. With a simple compressor like gzip, our
method achieves results competitive with those of
DNNs on six out of seven datasets and outperforms
all methods including BERT on all OOD datasets.
It also surpasses all models by a large margin under
few-shot settings.
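For concreteness, a self-contained version of this classifier fits in a dozen lines: the distance is the normalized compression distance (NCD) computed with gzip, followed by a k-nearest-neighbor vote. The tiny toy dataset at the end is purely illustrative.

    import gzip
    from collections import Counter

    def clen(s):
        # Compressed length of a string under gzip.
        return len(gzip.compress(s.encode("utf-8")))

    def ncd(x, y):
        # Normalized compression distance between two documents.
        cx, cy, cxy = clen(x), clen(y), clen(x + " " + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    def predict(test_doc, train_docs, train_labels, k=3):
        # Label a test document by a k-nearest-neighbor vote under NCD.
        order = sorted(range(len(train_docs)), key=lambda i: ncd(test_doc, train_docs[i]))
        votes = Counter(train_labels[i] for i in order[:k])
        return votes.most_common(1)[0][0]

    docs = ["the match ended two to one", "stocks fell sharply on friday",
            "the striker scored a late goal", "the central bank raised interest rates"]
    labels = ["sport", "finance", "sport", "finance"]
    print(predict("the goalkeeper saved a penalty", docs, labels, k=1))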
Our contributions are as follows: (1) we are the
first to use NCD with kNN for topic classifica-
tion, allowing us to carry out comprehensive ex-
periments on large datasets with compressor-based
methods; (2) we show that our method achieves
results comparable to non-pretrained DNNs on six
out of seven in-distribution datasets; (3) on OOD
datasets, we show that our method outperforms
all methods, including pretrained models such as
BERT; and (4) we demonstrate that our method ex-
cels in the few-shot setting of scarce labeled data. |
1911.00172.pdf | Published as a conference paper at ICLR 2020
GENERALIZATION THROUGH MEMORIZATION:
NEAREST NEIGHBOR LANGUAGE MODELS
Urvashi Khandelwal†∗, Omer Levy‡, Dan Jurafsky†, Luke Zettlemoyer‡ & Mike Lewis‡
†Stanford University
‡Facebook AI Research
{urvashik,jurafsky }@stanford.edu
{omerlevy,lsz,mikelewis }@fb.com
ABSTRACT
We introduce kNN-LMs, which extend a pre-trained neural language model (LM)
by linearly interpolating it with a k-nearest neighbors ( kNN) model. The near-
est neighbors are computed according to distance in the pre-trained LM embed-
ding space, and can be drawn from any text collection, including the original LM
training data. Applying this augmentation to a strong W IKITEXT -103 LM, with
neighbors drawn from the original training set, our kNN-LM achieves a new state-
of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional train-
ing. We also show that this approach has implications for efficiently scaling up to
larger training sets and allows for effective domain adaptation, by simply varying
the nearest neighbor datastore, again without further training. Qualitatively, the
model is particularly helpful in predicting rare patterns, such as factual knowl-
edge. Together, these results strongly suggest that learning similarity between se-
quences of text is easier than predicting the next word, and that nearest neighbor
search is an effective approach for language modeling in the long tail.
1 INTRODUCTION
Neural language models (LMs) typically solve two subproblems: (1) mapping sentence prefixes to
fixed-sized representations, and (2) using these representations to predict the next word in the text
(Bengio et al., 2003; Mikolov et al., 2010). We present a new language modeling approach that is
based on the hypothesis that the representation learning problem may be easier than the prediction
problem. For example, any English speaker knows that Dickens is the author of and Dickens wrote
will have essentially the same distribution over the next word, even if they do not know what that
distribution is. We provide strong evidence that existing language models, similarly, are much better
at the first problem, by using their prefix embeddings in a simple nearest neighbor scheme that
significantly improves overall performance.
We introduce kNN-LM, an approach that extends a pre-trained LM by linearly interpolating its next
word distribution with a k-nearest neighbors ( kNN) model. The nearest neighbors are computed
according to distance in the pre-trained embedding space and can be drawn from any text collec-
tion, including the original LM training data. This approach allows rare patterns to be memorized
explicitly, rather than implicitly in model parameters. It also improves performance when the same
training data is used for learning the prefix representations and the kNN model, strongly suggesting
that the prediction problem is more challenging than previously appreciated.
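The interpolation itself is a one-liner once the neighbors are retrieved; the sketch below uses brute-force squared-L2 search and a softmax over negative distances to form the kNN distribution, abstracting away the approximate-nearest-neighbor index and datastore construction used in practice. Array shapes and the interpolation weight are illustrative.

    import numpy as np

    def knn_lm_probs(p_lm, prefix_vec, keys, values, vocab_size, k=8, lam=0.25):
        # keys:   (N, d) stored prefix embeddings; values: (N,) their next-token ids.
        # Retrieve the k nearest stored prefixes by squared L2 distance.
        d2 = ((keys - prefix_vec) ** 2).sum(axis=1)
        nn = np.argsort(d2)[:k]
        # Softmax over negative distances gives a distribution over retrieved targets.
        w = np.exp(-(d2[nn] - d2[nn].min()))
        w /= w.sum()
        p_knn = np.zeros(vocab_size)
        np.add.at(p_knn, values[nn], w)   # aggregate weight per next-token id
        # Linear interpolation with the LM's own next-word distribution.
        return lam * p_knn + (1.0 - lam) * p_lm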
To better measure these effects, we conduct an extensive empirical evaluation. Applying our kNN
augmentation to a strong W IKITEXT -103 LM using only the original dataset achieves a new state-
of-the-art perplexity of 15.79 – a 2.86 point improvement over the base model (Baevski & Auli,
2019) – with no additional training. We also show that the approach has implications for efficiently
scaling up to larger training sets and allows for effective domain adaptation, by simply varying the
nearest neighbor datastore. Training a model on 100-million tokens and using kNN search over a
3-billion token dataset can outperform training the same model on all 3-billion tokens, opening a
∗Work done while the first author was interning at Facebook AI Research.
arXiv:1911.00172v2 [cs.CL] 15 Feb 2020 |
2024.03.18.585544v1.full.pdf |
Towards Interpretable Cryo-EM: Disentangling
Latent Spaces of Molecular Conformations
David A. Klindt1,2,∗, Aapo Hyvärinen3, Axel Levy1,4, Nina Miolane2 and Frédéric Poitevin1
1LCLS, SLAC National Accelerator Laboratory, Stanford University, CA, USA
2Department of Electrical and Computer Engineering, UCSB, CA, USA
3Department of Computer Science, University of Helsinki, Finland
4Department of Electrical Engineering, Stanford, CA, USA
Correspondence*:
David A. Klindt
klindt.david@gmail.com
ABSTRACT
Molecules are essential building blocks of life and their different conformations (i.e., shapes) crucially determine the functional role that they play in living organisms. Cryogenic Electron Microscopy (cryo-EM) allows for acquisition of large image datasets of individual molecules. Recent advances in computational cryo-EM have made it possible to learn latent variable models of conformation landscapes. However, interpreting these latent spaces remains a challenge as their individual dimensions are often arbitrary. The key message of our work is that this interpretation challenge can be viewed as an Independent Component Analysis (ICA) problem where we seek models that have the property of identifiability. That means, they have an essentially unique solution, representing a conformational latent space that separates the different degrees of freedom a molecule is equipped with in nature. Thus, we aim to advance the computational field of cryo-EM beyond visualizations as we connect it with the theoretical framework of (nonlinear) ICA and discuss the need for identifiable models, improved metrics, and benchmarks. Moving forward, we propose future directions for enhancing the disentanglement of latent spaces in cryo-EM, refining evaluation metrics and exploring techniques that leverage physics-based decoders of biomolecular systems. Moreover, we discuss how future technological developments in time-resolved single particle imaging may enable the application of nonlinear ICA models that can discover the true conformation changes of molecules in nature. The pursuit of interpretable conformational latent spaces will empower researchers to unravel complex biological processes and facilitate targeted interventions. This has significant implications for drug discovery and structural biology more broadly. More generally, latent variable models are deployed widely across many scientific disciplines. Thus, the argument we present in this work has much broader applications in AI for science if we want to move from impressive nonlinear neural network models to mathematically grounded methods that can help us learn something new about nature.
Keywords: cryo-EM, machine learning, ICA, AI for science, disentanglement, physics-based models
|
2309.03649.pdf | Exploring kinase DFG loop conformational
stability with AlphaFold2-RAVE
Bodhi P. Vani,†Akashnathan Aranganathan,‡and Pratyush Tiwary∗,¶,§
†Institute for Physical Science and Technology, University of Maryland, College Park,
Maryland 20742, USA
‡Biophysics Program and Institute for Physical Science and Technology, University of
Maryland, College Park 20742, USA
¶Department of Chemistry and Biochemistry and Institute for Physical Science and
Technology, University of Maryland, College Park 20742, USA
§Corresponding author
E-mail: ptiwary@umd.edu
Abstract
Kinases compose one of the largest fractions of the human proteome, and their
misfunction is implicated in many diseases, in particular cancers. The ubiquitousness
and structural similarities of kinases make specific and effective drug design difficult.
In particular, conformational variability due to the evolutionarily conserved DFG mo-
tif adopting in and out conformations and the relative stabilities thereof are key in
structure-based drug design for ATP competitive drugs. These relative conformational
stabilities are extremely sensitive to small changes in sequence, and provide an impor-
tant problem for sampling method development. Since the invention of AlphaFold2, the
world of structure-based drug design has noticeably changed. In spite of it being limited
to crystal-like structure prediction, several methods have also leveraged its underlying
arXiv:2309.03649v1 [physics.bio-ph] 7 Sep 2023 |
NIPS-2007-active-preference-learning-with-discrete-choice-data-Paper.pdf | Active Preference Learning with Discrete Choice Data
Eric Brochu, Nando de Freitas and Abhijeet Ghosh
Department of Computer Science
University of British Columbia
Vancouver, BC, Canada
{ebrochu, nando, ghosh}@cs.ubc.ca
Abstract
We propose an active learning algorithm that learns a continuous valuation model
from discrete preferences. The algorithm automatically decides what items are
best presented to an individual in order to find the item that they value highly in
as few trials as possible, and exploits quirks of human psychology to minimize
time and cognitive burden. To do this, our algorithm maximizes the expected
improvement at each query without accurately modelling the entire valuation sur-
face, which would be needlessly expensive. The problem is particularly difficult
because the space of choices is infinite. We demonstrate the effectiveness of the
new algorithm compared to related active learning methods. We also embed the
algorithm within a decision making tool for assisting digital artists in rendering
materials. The tool finds the best parameters while minimizing the number of
queries.
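For reference, the expected-improvement criterion that drives query selection can be written directly from a Gaussian-process posterior; the sketch below assumes the posterior mean and standard deviation at a candidate are already available, and omits the preference-based (discrete choice) likelihood the paper uses to learn the latent valuation in the first place.

    from statistics import NormalDist

    def expected_improvement(mu, sigma, f_best):
        # mu, sigma: GP posterior mean and standard deviation at the candidate item.
        # f_best:    valuation of the best item found so far.
        if sigma <= 0.0:
            return max(mu - f_best, 0.0)
        z = (mu - f_best) / sigma
        n = NormalDist()
        return (mu - f_best) * n.cdf(z) + sigma * n.pdf(z)

    # The next item shown to the user maximizes this quantity over candidates, e.g.:
    # x_next = max(candidates, key=lambda x: expected_improvement(*posterior(x), f_best))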
1 Introduction
A computer graphics artist sits down to use a simple renderer to find appropriate surfaces for a
typical reflectance model. It has a series of parameters that must be set to control the simulation:
“specularity”, “Fresnel reflectance coefficient”, and other, less-comprehensible ones. The parame-
ters interact in ways difficult to discern. The artist knows in his mind’s eye what he wants, but he’s
not a mathematician or a physicist — no course he took during his MFA covered Fresnel reflectance
models. Even if it had, would it help? He moves the specularity slider and waits for the image
to be generated. The surface is too shiny. He moves the slider back a bit and runs the simulation
again. Better. The surface is now appropriately dull, but too dark. He moves a slider down. Now
it’s the right colour, but the specularity doesn’t look quite right any more. He repeatedly bumps the
specularity back up, rerunning the renderer at each attempt until it looks right. Good. Now, how to
make it look metallic...?
Problems in simulation, animation, rendering and other areas often take such a form, where the
desired end result is identifiable by the user, but parameters must be tuned in a tedious trial-and-
error process. This is particularly apparent in psychoperceptual models, where continual tuning is
required to make something “look right”. Using the animation of character walking motion as an
example, for decades, animators and scientists have tried to develop objective functions based on
kinematics, dynamics and motion capture data [Cooper et al., 2007 ]. However, even when expen-
sive mocap is available, we simply have to watch an animated film to be convinced of how far we
still are from solving the gait animation problem. Unfortunately, it is not at all easy to find a mapping
from parameterized animation to psychoperceptual plausibility. The perceptual objective function is
simply unknown. Fortunately, however, it is fairly easy to judge the quality of a walk — in fact, it is
trivial and almost instantaneous. The application of this principle to animation and other psychoper-
ceptual tools is motivated by the observation that humans often seem to be forming a mental model
of the objective function. This model enables them to exploit feasible regions of the parameter space
where the valuation is predicted to be high and to explore regions of high uncertainty. It is our the-
|
2206.14858.pdf | Solving Quantitative Reasoning Problems with
Language Models
Aitor Lewkowycz∗, Anders Andreassen†, David Dohan†, Ethan Dyer†, Henryk Michalewski†,
Vinay Ramasesh†, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur∗, Guy Gur-Ari∗, and Vedant Misra∗
Google Research
Abstract
Language models have achieved remarkable performance on a wide range of tasks that require natural
language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that
require quantitative reasoning, such as solving mathematics, science, and engineering problems at the
college level. To help close this gap, we introduce Minerva, a large language model pretrained on general
natural language data and further trained on technical content. The model achieves state-of-the-art
performance on technical benchmarks without the use of external tools. We also evaluate our model
on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other
sciences that require quantitative reasoning, and find that the model can correctly answer nearly a third
of them.
1 Introduction
Artificial neural networks have seen remarkable success in a variety of domains including computer vision,
speech recognition, audio and image generation, translation, game playing, and robotics. In particular, large
language models have achieved excellent performance across a variety of natural language tasks including
common-sense reasoning, question answering, and summarization (Raffel et al., 2019; Brown et al., 2020;
Rae et al., 2021; Smith et al., 2022; Chowdhery et al., 2022). However, these models have struggled with
tasks that require quantitative reasoning, such as solving mathematics, science, and engineering problems
(Hendrycks et al., 2021; Cobbe et al., 2021).
Quantitative reasoning problems are an interesting domain of application for language models because they
test the capability of models on several fronts. They require the solver to correctly parse a natural language
input, potentially recall world knowledge that pertains to the problem, and apply an algorithm or series of
computations to the information provided in order to arrive at a correct solution. They also require that the
solver is able to correctly parse and generate precise sequences of mathematical tokens, as well as apply a
computational procedure to tokens via symbolic and numerical manipulation. Finally, such problems are a
proving ground for research toward robust quantitative reasoning solvers that are useful in supporting the
work of humans in scientific and technical fields.
Previous research has shown that large language models achieve impressive performance on math and
programming questions after training on domain specific datasets (Chen et al., 2021; Austin et al., 2021;
∗Equal leadership and advising contribution
†Equal contribution
arXiv:2206.14858v2 [cs.CL] 1 Jul 2022 |
1909.12264.pdf | Quantum Graph Neural Networks
Guillaume Verdon
X, The Moonshot Factory
Mountain View, CA
gverdon@x.teamTrevor McCourt
Google Research
Venice, CA
trevormccrt@google.com
Enxhell Luzhnica, Vikash Singh,
Stefan Leichenauer, Jack Hidary
X, The Moonshot Factory
Mountain View, CA
{enxhell,singvikash,
sleichenauer,hidary}@x.team
Abstract
We introduce Quantum Graph Neural Networks ( QGNN ), a new class of quantum
neural network ansatze which are tailored to represent quantum processes which
have a graph structure, and are particularly suitable to be executed on distributed
quantum systems over a quantum network. Along with this general class of ansatze,
we introduce further specialized architectures, namely, Quantum Graph Recurrent
Neural Networks ( QGRNN ) and Quantum Graph Convolutional Neural Networks
(QGCNN ). We provide four example applications of QGNN s: learning Hamiltonian
dynamics of quantum systems, learning how to create multipartite entanglement in
a quantum network, unsupervised learning for spectral clustering, and supervised
learning for graph isomorphism classification.
1 Introduction
Variational Quantum Algorithms are a promising class of algorithms that are rapidly emerging
as a central subfield of Quantum Computing [ 1,2,3]. Similar to parameterized transformations
encountered in deep learning, these parameterized quantum circuits are often referred to as Quantum
Neural Networks (QNNs). Recently, it was shown that QNNs that have no prior on their structure
suffer from a quantum version of the no-free lunch theorem [ 4] and are exponentially difficult to
train via gradient descent. Thus, there is a need for better QNN ansatze. One popular class of
QNNs has been Trotter-based ansatze [ 2,5]. The optimization of these ansatze has been extensively
studied in recent works, and efficient optimization methods have been found [ 6,7]. On the classical
side, graph-based neural networks leveraging data geometry have seen some recent successes in
deep learning, finding applications in biophysics and chemistry [ 8]. Inspired from this success, we
propose a new class of Quantum Neural Network ansatz which allows for both quantum inference
and classical probabilistic inference for data with a graph-geometric structure. In the sections below,
we introduce the general framework of the QGNN ansatz as well as several more specialized variants
and showcase four potential applications via numerical implementation.
Preprint. Under review. arXiv:1909.12264v1 [quant-ph] 26 Sep 2019 |
2403.08763.pdf | Simple and Scalable Strategies to Continually Pre-train
Large Language Models
Adam Ibrahim∗†⊚ ibrahima@mila.quebec
Benjamin Thérien∗†⊚ benjamin.therien@mila.quebec
Kshitij Gupta∗†⊚ kshitij.gupta@mila.quebec
Mats L. Richter†⊚ mats.richter@mila.quebec
Quentin Anthony♢†⊚ qubitquentin@gmail.com
Timothée Lesort†⊚ t.lesort@gmail.com
Eugene Belilovsky‡⊚ eugene.belilovsky@concordia.ca
Irina Rish†⊚ irina.rish@umontreal.ca
Department of Computer Science and Operation Research,
Université de Montréal, Montréal, Canada †
Department of Computer Science and Software Engineering,
Concordia University, Montréal, Canada ‡
Mila, Montréal, Canada ⊚
EleutherAI ♢
Abstract
Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start
the process over again once new data becomes available. A much more efficient solution is
to continually pre-train these models – saving significant compute compared to re-training.
However, the distribution shift induced by new data typically results in degraded performance
on previous data or poor adaptation to the new data. In this work, we show that a simple and
scalable combination of learning rate (LR) re-warming, LR re-decaying, and replay of previous
data is sufficient to match the performance of fully re-training from scratch on all available
data, as measured by final loss and language model (LM) evaluation benchmarks. Specifically,
we show this for a weak but realistic distribution shift between two commonly used LLM
pre-training datasets (English →English) and a stronger distribution shift (English →German)
at the 405M parameter model scale with large dataset sizes (hundreds of billions of tokens).
Selecting the weak but realistic shift for larger-scale experiments, we also find that our
continual learning strategies match the re-training baseline for a 10B parameter LLM. Our
results demonstrate that LLMs can be successfully updated via simple and scalable continual
learning strategies, matching the re-training baseline using only a fraction of the compute.
Finally, inspired by previous work, we propose alternatives to the cosine learning rate schedule
that help circumvent forgetting induced by LR re-warming and that are not bound to a fixed
token budget.
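To make the recipe concrete, the sketch below shows the two ingredients in isolation: a learning-rate schedule that re-warms linearly and then re-decays with a cosine, and batch construction that replays a small fraction of previous-distribution data. All constants (warm-up length, learning-rate bounds, replay fraction) are illustrative placeholders rather than the tuned values reported in the paper.

    import math, random

    def rewarm_recosine_lr(step, total_steps, warmup_steps=1000, lr_max=3e-4, lr_min=3e-5):
        # Linear re-warming followed by cosine re-decay for the new pre-training phase.
        if step < warmup_steps:
            return lr_max * (step + 1) / warmup_steps
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

    def mixed_batch(new_data, old_data, batch_size=8, replay_frac=0.05):
        # Replay a small fraction of examples from the previous dataset in every batch.
        n_old = max(1, round(replay_frac * batch_size))
        return random.sample(old_data, n_old) + random.sample(new_data, batch_size - n_old)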
1 Introduction
Over the past few years, large pre-trained models have enabled massive performance improvements in
language modeling (Brown et al., 2020; Zhao et al., 2023), visual understanding (Radford et al., 2021; Alayrac
et al., 2022; Kirillov et al., 2023), text-to-image generation (Rombach et al., 2022; Pernias et al., 2024), and
text-to-video generation (Brooks et al., 2024)—to name a few. Large language models (LLMs) are at the
center of all these improvements, providing an intuitive means for humans to interface with machine learning
algorithms through language.
∗Equal contribution; authorship order within equal contributors was randomized.
arXiv:2403.08763v1 [cs.LG] 13 Mar 2024 |
2310.02226.pdf | Think before you speak:
Training Language Models With Pause Tokens
Sachin Goyal∗ (Machine Learning Department, Carnegie Mellon University, sachingo@andrew.cmu.edu), Ziwei Ji (Google Research, NY, ziweiji@google.com), Ankit Singh Rawat (Google Research, NY, ankitsrawat@google.com), Aditya Krishna Menon (Google Research, NY, adityakmenon@google.com), Sanjiv Kumar (Google Research, NY, sanjivk@google.com), Vaishnavh Nagarajan (Google Research, NY, vaishnavh@google.com)
Abstract
Language models generate responses by producing a series of tokens in immediate
succession: the (K+1)th token is an outcome of manipulating K hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate say, K+10 hidden vectors, before it outputs the (K+1)th token? We
operationalize this idea by performing training and inference on language mod-
els with a (learnable) pause token, a sequence of which is appended to the input
prefix. We then delay extracting the model’s outputs until the last pause token is
seen, thereby allowing the model to process extra computation before committing
to an answer. We empirically evaluate pause-training on decoder-only models
of 1B and 130M parameters with causal pretraining on C4, and on downstream
tasks covering reasoning, question-answering, general understanding and fact re-
call. Our main finding is that inference-time delays show gains on our tasks when
the model is both pre-trained and finetuned with delays. For the 1B model, we
witness gains on eight tasks, most prominently, a gain of 18% EM score on the
QA task of SQuAD, 8% on CommonSenseQA and 1% accuracy on the reason-
ing task of GSM8k. Our work raises a range of conceptual and practical future
research questions on making delayed next-token prediction a widely applicable
new paradigm.
1 Introduction
Transformer-based causal language models generate tokens one after the other in immediate succes-
sion. To generate the (K+1)th token, the model consumes the K previous tokens, and proceeds layer by layer, computing K intermediate vectors in each hidden layer. Each vector in itself is the
output of a module (consisting of self-attention and multi-layer-perceptrons) operating on the pre-
vious layer’s output vectors. However sophisticated this end-to-end process may be, it abides by a
peculiar constraint: the number of operations determining the next token is limited by the number
of tokens seen so far. Arguably, this was the most natural design choice when the Transformer was
first conceived by Vaswani et al. (2017). But in hindsight, one may wonder whether for some inputs,
the (K+1)th token demands K+M Transformer operations in each layer (for M > 0), which cannot be met by the arbitrarily constrained K operations per layer. This paper explores one way to
free the Transformer of this arbitrary per-layer computational constraint.
The approach we study is to append dummy tokens into a decoder-only model’s input, thereby de-
laying the model’s output. Specifically, we select a (learnable) pause token (denoted <pause> ) and
append one or more copies of <pause> as a sequence to the input. We simply ignore the model’s cor-
responding outputs until the last <pause> token is seen, after which we begin extracting its response.
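At inference time the mechanism reduces to token bookkeeping, as in the sketch below; `step` stands in for one greedy decoding step of the language model, and the function only illustrates the delay itself, not the pause-injected pretraining and finetuning (where, in addition, the loss at <pause> positions is ignored). All names are placeholders.

    def generate_with_pauses(prompt_ids, pause_id, num_pauses, step, max_new_tokens=64):
        # prompt_ids: list of input token ids; pause_id: id of the learnable <pause> token.
        # step(ids) -> next_id performs one greedy decoding step of the model.
        ids = list(prompt_ids) + [pause_id] * num_pauses   # append M copies of <pause>
        answer = []
        # Extraction of the response begins only after the last <pause> token;
        # the model's outputs at the pause positions themselves are ignored.
        for _ in range(max_new_tokens):
            next_id = step(ids)
            ids.append(next_id)
            answer.append(next_id)
        return answer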
∗Work done in part as a Student Researcher at Google.
arXiv:2310.02226v1 [cs.CL] 3 Oct 2023 |
2212.00178.pdf | Open Relation and Event Type Discovery with Type Abstraction
Sha Li, Heng Ji, Jiawei Han
University of Illinois Urbana-Champaign
{shal2, hengji, hanj}@illinois.edu
Abstract
Conventional “closed-world" information ex-
traction (IE) approaches rely on human ontolo-
gies to define the scope for extraction. As
a result, such approaches fall short when ap-
plied to new domains. This calls for systems
that can automatically infer new types from
given corpora, a task which we refer to as type
discovery . To tackle this problem, we intro-
duce the idea of type abstraction, where the
model is prompted to generalize and name the
type. Then we use the similarity between in-
ferred names to induce clusters. Observing
that this abstraction-based representation is of-
ten complementary to the entity/trigger token
representation, we set up these two represen-
tations as two views and design our model as
a co-training framework. Our experiments on
multiple relation extraction and event extrac-
tion datasets consistently show the advantage
of our type abstraction approach.
1 Introduction
Information extraction has enjoyed widespread suc-
cess, however, the majority of information extrac-
tion methods are “reactive”, relying on end-users
to specify their information needs in prior and pro-
vide supervision accordingly. This leads to “closed-
world” systems (Lin et al., 2020; Du and Cardie,
2020; Li et al., 2021; Zhong and Chen, 2021; Ye
et al., 2022) that are confined to a set of pre-defined
types. It is desirable to make systems act more
“proactively” like humans who are always on the
lookout for interesting new information, generalize
them into new types, and find more instances of
such types, even if they are not seen previously.
One related attempt is the Open Information Ex-
traction paradigm (Banko et al., 2008), which aims
at extracting all (subject, predicate, object) triples
from text that denote some kind of relation. While
OpenIE does not rely on pre-specified relations,
its exhaustive and free-form nature often leads to
noisy and redundant extractions.
Figure 1: For each instance, the token view is computed from the pre-trained LM embedding of the first token in the entity/trigger. The mask view is computed from the [MASK] token embedding in the type prompt. (Figure example: sentence "<h>John</h> earned a bachelor's degree from the <t>University of Wollongong</t>."; mask view prompt "University of Wollongong is the [MASK] of John."; relation: School_Attended.)
To bridge the gap between closed-world IE and
OpenIE, a vital step is for systems to possess the
ability of automatically inducing new types and
extracting instances of such new types. Under vari-
ous contexts, related methods have been proposed
under the name of “relation discovery” (Yao et al.,
2011; Marcheggiani and Titov, 2016),“open rela-
tion extraction” (Wu et al., 2019; Hu et al., 2020)
and “event type induction” (Huang and Ji, 2020;
Shen et al., 2021). In this paper, we unify such
terms and refer to the task as type discovery .
Type discovery can naturally be posed as a clus-
tering task. This heavily relies on defining an appro-
priate metric space where types are easily separable.
The token embedding space from pre-trained lan-
guage models is a popular choice, but as observed
by (Zhao et al., 2021), the original metric space
derived from BERT (Devlin et al., 2019) is often
prone to reflect surface form similarity rather than
the desired relation/event-centered similarity. One
way to alleviate this issue is to use known types
to help learn a similarity metric that can also be
applied to unknown types (Wu et al., 2019; Zhao
et al., 2021; Huang and Ji, 2020).
In this paper we introduce another idea of ab-
straction : a discovered type should have an ap-
propriate and concise type name. The human vo-
cabulary serves as a good repository of concepts
that appear meaningful to people. When we assign
a name to a cluster, we implicitly define the com- arXiv:2212.00178v1 [cs.CL] 30 Nov 2022 |
10.1016.j.cell.2023.12.037.pdf | Article
Xist ribonucleoproteins promote female sex-biased
autoimmunity
Graphical abstract
Highlights
- Transgenic mouse models inducibly express Xist in male animals
- Xist expression in males induces autoantibodies and autoimmune pathology
- Xist in males reprograms T and B cell populations to female-like patterns
- Autoantibodies to Xist RNP characterize female-biased autoimmune diseases in patients
Authors
Diana R. Dou, Yanding Zhao, Julia A. Belk, ..., Anton Wutz, Paul J. Utz, Howard Y. Chang
Correspondence
howchang@stanford.edu
In brief
The Xist RNA protein complex, present only in females, is immunogenic and may underlie female-biased autoimmunity.
Dou et al., 2024, Cell 187, 733–749
February 1, 2024 © 2024 The Authors. Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.037
|
2012.02296v2.pdf | Generative Capacity of Probabilistic Protein
Sequence Models
Francisco McGee1,2,4, Quentin Novinger2,5, Ronald M Levy1,3,4,6, Vincenzo Carnevale2,3,*,
and Allan Haldane1,6,*
1Center for Biophysics and Computational Biology, Temple University, Philadelphia, 19122, USA
2Institute for Computational Molecular Science, Temple University, Philadelphia, 19122, USA
3Department of Biology, Temple University, Philadelphia, 19122, USA
4Department of Chemistry, Temple University, Philadelphia, 19122, USA
5Department of Computer & Information Sciences, Temple University, Philadelphia, 19122, USA
6Department of Physics, Temple University, Philadelphia, 19122, USA
*Corresponding authors: vincenzo.carnevale@temple.edu, allan.haldane@temple.edu
ABSTRACT
Potts models and variational autoencoders (VAEs) have recently gained popularity as generative protein sequence models
(GPSMs) to explore fitness landscapes and predict the effect of mutations. Despite encouraging results, quantitative characteri-
zation and comparison of GPSM-generated probability distributions is still lacking. It is currently unclear whether GPSMs can
faithfully reproduce the complex multi-residue mutation patterns observed in natural sequences arising due to epistasis. We
develop a set of sequence statistics to assess the “generative capacity” of three GPSMs of recent interest: the pairwise Potts
Hamiltonian, the VAE, and the site-independent model, using natural and synthetic datasets. We show that the generative
capacity of the Potts Hamiltonian model is the largest; the higher order mutational statistics generated by the model agree with
those observed for natural sequences. In contrast, we show that the VAE’s generative capacity lies between the pairwise Potts
and site-independent models. Importantly, our work measures GPSM generative capacity in terms of higher-order sequence
covariation statistics which we have developed, and provides a new framework for evaluating and interpreting GPSM accuracy
that emphasizes the role of epistasis.
Introduction
Recent progress in decoding the patterns of mutations in protein multiple sequence alignments (MSAs) has
highlighted the importance of mutational covariation in determining protein function, conformation and evolution,
and has found practical applications in protein design, drug design, drug resistance prediction, and classification1–3.
These developments were sparked by the recognition that the pairwise covariation of mutations observed in large
MSAs of evolutionarily diverged sequences belonging to a common protein family can be used to fit maximum
entropy “Potts” statistical models4–6. These contain pairwise statistical interaction parameters reflecting epistasis7
between pairs of positions. Such models have been shown to accurately predict physical contacts in protein
structure6,8–10, and have been used to significantly improve the prediction of the fitness effect of mutations to a
sequence compared to site-independent sequence variation models which do not account for covariation11,12.
They are “generative” in the sense that they define the probability, p(S), that a protein sequence Sresults from the
evolutionary process. Intriguingly, the probability distribution p(S)can be used to sample unobserved, and yet viable,
artificial sequences. In practice, the model distribution p(S)depends on parameters that are found by maximizing a
suitably defined likelihood function on observations provided by the MSA of a target protein family. As long as the
model is well specified and generalizes from the training MSA, it can then be used to generate new sequences, and
thus a new MSA whose statistics should match those of the original target protein family. We refer to probabilistic
models that create new protein sequences in this way as generative protein sequence models (GPSMs).
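Concretely, a pairwise Potts model assigns an aligned sequence S = (s_1, ..., s_L) the energy E(S) = −Σ_i h_i(s_i) − Σ_{i<j} J_ij(s_i, s_j) and the probability p(S) ∝ exp(−E(S)). The sketch below only evaluates this energy for given field and coupling tables; parameter inference and MCMC sequence generation, which the GPSM comparison relies on, are outside its scope, and the random parameters in the example are purely illustrative.

    import numpy as np

    def potts_energy(seq, h, J):
        # seq: length-L array of residue indices in 0..q-1
        # h:   (L, q) per-site fields;  J: (L, L, q, q) pairwise couplings (only i < j used)
        L = len(seq)
        e = -sum(h[i, seq[i]] for i in range(L))
        e -= sum(J[i, j, seq[i], seq[j]] for i in range(L) for j in range(i + 1, L))
        return e

    def log_prob_unnormalized(seq, h, J):
        # log p(S) up to the (intractable) log partition function.
        return -potts_energy(seq, h, J)

    # Example with random parameters for a length-5 fragment over q = 21 states:
    L, q = 5, 21
    rng = np.random.default_rng(0)
    h, J = rng.normal(size=(L, q)), rng.normal(size=(L, L, q, q))
    print(potts_energy(rng.integers(0, q, size=L), h, J))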
The fact that Potts maximum entropy models are limited to pairwise epistatic interaction terms and have a simple
functional form for p(S)raises the possibility that their functional form is not flexible enough to describe the data,
i.e. that the model is not well specified. While a model with only pairwise interaction terms can predict complex
patterns of covariation involving three or more positions through chains of pairwise interactions, it cannot model
certain triplet and higher patterns of covariation that require a model with more than pairwise interaction terms13.
For example, a Potts model cannot predict patterns described by an XOR or boolean parity function in which the
arXiv:2012.02296v2 [cs.LG] 15 Mar 2021 |
2401.00368.pdf | Improving Text Embeddings with
Large Language Models
Liang Wang∗, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei
Microsoft Corporation
https://aka.ms/GeneralAI
Abstract
In this paper, we introduce a novel and simple method for obtaining high-quality
text embeddings using only synthetic data and less than 1k training steps. Unlike
existing methods that often depend on multi-stage intermediate pre-training with
billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled
datasets, our method does not require building complex training pipelines or relying
on manually collected datasets that are often constrained by task diversity and
language coverage. We leverage proprietary LLMs to generate diverse synthetic
data for hundreds of thousands of text embedding tasks across nearly 100 languages.
We then fine-tune open-source decoder-only LLMs on the synthetic data using
standard contrastive loss. Experiments demonstrate that our method achieves
strong performance on highly competitive text embedding benchmarks without
using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic
and labeled data, our model sets new state-of-the-art results on the BEIR and
MTEB benchmarks.
1 Introduction
Text embeddings are vector representations of natural language that encode its semantic information.
They are widely used in various natural language processing (NLP) tasks, such as information
retrieval (IR), question answering, semantic textual similarity, bitext mining, item recommendation,
etc. In the field of IR, the first-stage retrieval often relies on text embeddings to efficiently recall
a small set of candidate documents from a large-scale corpus using approximate nearest neighbor
search techniques. Embedding-based retrieval is also a crucial component of retrieval-augmented
generation (RAG) [ 21], which is an emerging paradigm that enables large language models (LLMs)
to access dynamic external knowledge without modifying the model parameters. Source attribution
of generated text is another important application of text embeddings [ 14] that can improve the
interpretability and trustworthiness of LLMs.
Previous studies have demonstrated that weighted average of pre-trained word embeddings [ 35,1]
is a strong baseline for measuring semantic similarity. However, these methods fail to capture the
rich contextual information of natural language. With the advent of pre-trained language models
[11], Sentence-BERT [ 37] and SimCSE [ 13] have been proposed to learn text embeddings by fine-
tuning BERT on natural language inference (NLI) datasets. To further enhance the performance and
robustness of text embeddings, state-of-the-art methods like E5 [ 46] and BGE [ 48] employ a more
complex multi-stage training paradigm that first pre-trains on billions of weakly-supervised text pairs,
and then fine-tunes on several labeled datasets.
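For reference, the "standard contrastive loss" used throughout this line of work is typically an InfoNCE objective with in-batch negatives; the toy NumPy sketch below is an illustration under that assumption (not the paper's training code), with placeholder embeddings, batch size, and temperature:

```python
import numpy as np

def info_nce(query_emb, doc_emb, temperature=0.05):
    """In-batch-negatives contrastive loss: the i-th query should score
    highest against the i-th document among all documents in the batch."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    logits = q @ d.T / temperature                 # (B, B) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy with targets i == i

rng = np.random.default_rng(0)
B, dim = 8, 32                                     # toy batch of (query, document) pairs
loss = info_nce(rng.normal(size=(B, dim)), rng.normal(size=(B, dim)))
print(f"toy contrastive loss: {loss:.3f}")
```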
Existing multi-stage approaches suffer from several drawbacks. Firstly, they entail a complex
multi-stage training pipeline that demands substantial engineering efforts to curate large amounts
∗Correspondence to {wangliang,nanya,fuwei}@microsoft.com
Technical Report. |
More-Is-Different-Anderson.pdf | The reductionist hypothesis may still
lbe a topic for controversy among phi-
losophers, but among the great majority
of active scientists I think it is accepted
without question The workings of our
minds and bodles, and of all the ani-
mate or lnanimate matter of which we
have any detailed knowledges are as
sumed to be controlled by the same set
o£ fundamental laws which except
under certain extreme conditions we
feel we know pretty well.
It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others. This point of view, which it is the main purpose of this article to oppose, is expressed in a rather well-known passage by Weisskopf (1):
Looking at the development of science in the Twentieth Century one can distinguish two trends, which I will call "intensive" and "extensive" research, lacking a better terminology. In short: intensive research goes for the fundamental laws, extensive research goes for the explanation of phenomena in terms of known fundamental laws. As always, distinctions of this kind are not unambiguous, but they are clear in most cases. Solid state physics, plasma physics, and perhaps also biology are extensive. High energy physics and a good part of nuclear physics are intensive. There is always much less intensive research going on than extensive. Once new fundamental laws are discovered, a large and ever increasing activity begins in order to apply the discoveries to hitherto unexplained phenomena. Thus, there are two dimensions to basic research. The frontier of science extends all along a long line from the newest and most modern intensive research, over the extensive research recently spawned by the intensive research of yesterday, to the broad and well developed web of extensive research activities based on intensive research of past decades.
The author is a member of the technical staff of the Bell Telephone Laboratories, Murray Hill, New Jersey 07974, and visiting professor of theoretical physics at Cavendish Laboratory, Cambridge, England. This article is an expanded version of a Regents' Lecture given in 1967 at the University of California, La Jolla.
The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. That is, it seems to me that one may array the sciences roughly linearly in a hierarchy, according to the idea: The elementary entities of science X obey the laws of science Y.
The effectiveness of this message may be indicated by the fact that I heard it quoted recently by a leader in the field of materials science, who urged the participants at a meeting dedicated to "fundamental problems in condensed matter physics" to accept that there were few or no such problems and that nothing was left but extensive science, which he seemed to equate with device engineering.
The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a "constructionist" one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.
X                                    Y
solid state or many-body physics     elementary particle physics
chemistry                            many-body physics
molecular biology                    chemistry
cell biology                         molecular biology
...                                  ...
psychology                           physiology
social sciences                      psychology
But this hierarchy does not imply that science X is "just applied Y." At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.
In my own field of many-body physics, we are, perhaps, closer to our fundamental, intensive underpinnings than in any other science in which non-trivial complexities occur, and as a result we have begun to formulate a general theory of just how this shift from quantitative to qualitative differentiation takes place. This formulation, called the theory of "broken symmetry," may be of help in making more generally clear the breakdown of the constructionist converse of reductionism. I will give an elementary and incomplete explanation of these ideas, and then go on to some more general speculative comments about analogies at
More Is Different
Broken symmetry and the nature of
the hierarchical structure of science
P. W. Anderson |
2002.11557v1.pdf | Query-Efficient Correlation Clustering
David García–Soriano
d.garcia.soriano@isi.it
ISI Foundation
Turin, ItalyKonstantin Kutzkov
kutzkov@gmail.com
Amalfi Analytics
Barcelona, Spain
Francesco Bonchi
francesco.bonchi@isi.it
ISI Foundation, Turin, Italy
Eurecat, Barcelona, SpainCharalampos Tsourakakis
ctsourak@bu.edu
Boston University
USA
ABSTRACT
Correlation clustering is arguably the most natural formulation of
clustering. Given nobjects and a pairwise similarity measure, the
goal is to cluster the objects so that, to the best possible extent,
similar objects are put in the same cluster and dissimilar objects
are put in different clusters.
A main drawback of correlation clustering is that it requires
as input the Θ(n²) pairwise similarities. This is often infeasible
to compute or even just to store. In this paper we study query-
efficient algorithms for correlation clustering. Specifically, we devise
a correlation clustering algorithm that, given a budget of Q queries, attains a solution whose expected number of disagreements is at most 3·OPT + O(n³/Q), where OPT is the optimal cost for the instance.
Its running time is O(Q), and can be easily made non-adaptive
(meaning it can specify all its queries at the outset and make them
in parallel) with the same guarantees. Up to constant factors, our
algorithm yields a provably optimal trade-off between the number
of queries Qand the worst-case error attained, even for adaptive
algorithms.
Finally, we perform an experimental study of our proposed
method on both synthetic and real data, showing the scalability
and the accuracy of our algorithm.
CCS CONCEPTS
•Theory of computation →Graph algorithms analysis ;Fa-
cility location and clustering ;Active learning ;
KEYWORDS
correlation clustering, active learning, query complexity, algorithm
design
ACM Reference Format:
David García–Soriano, Konstantin Kutzkov, Francesco Bonchi, and Char-
alampos Tsourakakis. 2020. Query-Efficient Correlation Clustering. In Pro-
ceedings of The Web Conference 2020 (WWW ’20), April 20–24, 2020, Taipei,
Taiwan. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3366423.
3380220
This paper is published under the Creative Commons Attribution 4.0 International
(CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their
personal and corporate Web sites with the appropriate attribution.
WWW ’20, April 20–24, 2020, Taipei, Taiwan
©2020 IW3C2 (International World Wide Web Conference Committee), published
under Creative Commons CC-BY 4.0 License.
ACM ISBN 978-1-4503-7023-3/20/04.
https://doi.org/10.1145/3366423.3380220
1 INTRODUCTION
Correlation clustering [3] (or cluster editing ) is a prominent cluster-
ing framework where we are given a set V = [n] and a symmetric pairwise similarity function $\mathrm{sim}\colon \binom{V}{2}\to\{0,1\}$, where $\binom{V}{2}$ is the set of unordered pairs of elements of V. The goal is to cluster the
items in such a way that, to the best possible extent, similar ob-
jects are put in the same cluster and dissimilar objects are put in
different clusters. Assuming that cluster identifiers are represented
by natural numbers, a clustering ℓis a function ℓ:V→N, and
each cluster is a maximal set of vertices sharing the same label.
Correlation clustering aims at minimizing the following cost:
$$\mathrm{cost}(\ell)=\sum_{\substack{(x,y)\in\binom{V}{2}\\ \ell(x)=\ell(y)}}\bigl(1-\mathrm{sim}(x,y)\bigr)+\sum_{\substack{(x,y)\in\binom{V}{2}\\ \ell(x)\neq\ell(y)}}\mathrm{sim}(x,y).\qquad(1)$$
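To make Eq. (1) concrete, here is a small NumPy sketch (my own illustration, not code from the paper) that counts disagreements between a ±-labeled similarity graph and a clustering:

```python
import numpy as np

def cc_cost(sim, labels):
    """Number of disagreements (Eq. 1): '+' pairs split across clusters
    plus '-' pairs placed in the same cluster. sim is a symmetric 0/1 matrix."""
    n = len(labels)
    cost = 0
    for x in range(n):
        for y in range(x + 1, n):
            same = labels[x] == labels[y]
            cost += (1 - sim[x, y]) if same else sim[x, y]
    return cost

# Toy instance: vertices {0, 1, 2} are mutually similar, vertex 3 is dissimilar.
sim = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0]])
print(cc_cost(sim, [0, 0, 0, 1]))  # 0 disagreements (perfect clustering)
print(cc_cost(sim, [0, 0, 1, 1]))  # 3 disagreements (splits {0,1,2}, merges 2 with 3)
```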
The intuition underlying the above problem definition is that
if two objects xandyare dissimilar and are assigned to the same
cluster we should pay a cost of 1, i.e., the amount of their dissimi-
larity. Similarly, if x,yare similar and they are assigned to different
clusters we should pay also cost 1, i.e., the amount of their similarity
sim(x,y). The correlation clustering framework naturally extends
to a non-binary, symmetric similarity function, i.e., $\mathrm{sim}\colon \binom{V}{2}\to[0,1]$. In this
paper we focus on the binary case; the general non-binary case
can be efficiently reduced to this case at a loss of only a constant
factor in the approximation [ 3, Thm. 23]. The binary setting can
be viewed very conveniently through graph-theoretic lenses: the n
items correspond to the vertices of a similarity graph G, which is a
complete undirected graph with edges labeled “+” or “-”. An edge e
causes a disagreement (ofcost1) between the similarity graph and
a clustering when it is a “+” edge connecting vertices in different
clusters, or a “–” edge connecting vertices within the same cluster. If
we were given a cluster graph [22], i.e., a graph whose set of positive
edges is the union of vertex-disjoint cliques, we would be able to
produce a perfect (i.e., cost 0) clustering simply by computing the
connected components of the positive graph. However, similarities
will generally be inconsistent with one another, so incurring a cer-
tain cost is unavoidable. Correlation clustering aims at minimizing
such cost. The problem may be viewed as the task of finding the
equivalence relation that most closely resembles a given symmetric
relation. The correlation clustering problem is NP-hard [3, 22]. |
10.1093.gbe.evad084.pdf | Unsupervised Deep Learning Can Identify Protein
Functional Groups from Unaligned Sequences
Kyle T. David
1,* and Kenneth M. Halanych
2
1Department of Biological Sciences, Auburn University, Auburn, Alabama, USA
2Center for Marine Sciences, University of North Carolina Wilmington, Wilmington, North Carolina, USA
*Corresponding author: E-mail: kzd0038@auburn.edu .
Accepted: 13 May 2023
Abstract
Interpreting protein function from sequence data is a fundamental goal of bioinformatics. However, our current understand -
ing of protein diversity is bottlenecked by the fact that most proteins have only been functionally validated in model organ -
isms, limiting our understanding of how function varies with gene sequence diversity. Thus, accuracy of inferences in clades
without model representatives is questionable. Unsupervised learning may help to ameliorate this bias by identifying highly
complex patterns and structure from large data sets without external labels. Here, we present DeepSeqProt, an unsupervised
deep learning program for exploring large protein sequence data sets. DeepSeqProt is a clustering tool capable of distinguish -
ing between broad classes of proteins while learning local and global structure of functional space. DeepSeqProt is capable of
learning salient biological features from unaligned, unannotated sequences. DeepSeqProt is more likely to capture complete
protein families and statistically significant shared ontologies within proteomes than other clustering methods. We hope this
framework will prove of use to researchers and provide a preliminary step in further developing unsupervised deep learning in
molecular biology.
Key words: machine learning, protein annotation, bioinformatics.
Introduction
As sequencing technology continues to improve, there is an
ever-increasing need to adequately annotate and charac -
terize novel protein sequences and their predicted func-
tions. With thousands of new sequences being uploaded
every day, predicting the function of every protein directly with conventional experimental studies such as gene
knockouts or assays is not possible. Thus, attempting to in-
fer protein function automatically is necessary. Many such
methods exist but fundamentally operate the same way:
by matching the sequence of a protein with unknown func-
tion to a reference sequence of a protein with known func-
tion and then assuming that functions are the same. These
Significance
In this manuscript, we report the results of a new unsupervised machine learning software, DeepSeqProt. Unsupervised
methods offer several advantages which can help escape longstanding pitfalls and biases pervading computational mo-
lecular biology. DeepSeqProt learns from and processes unaligned protein sequences with the goal of clustering them
into informative groups with regard to protein family and function, as well as distributing the clusters themselves in a
lower dimension space. We discovered that unsupervised deep learning is capable of recognizing patterns shared
among proteins of similar families and functional affinities, exceeding conventional sequence similarity-based clustering
in some scenarios. DeepSeqProt has broad applications for computational molecular biology and may be especially use-
ful for nonmodel organisms.
© The Author(s) 2023. Published by Oxford University Press on behalf of Society for Molecular Biology and Evolution.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/ ), which permits
non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
Genome Biol. Evol. 15(5) https://doi.org/10.1093/gbe/evad084 Advance Access publication 22 May 2023
|
10.1038.s41467-024-46631-y.pdf | Article https://doi.org/10.1038/s41467-024-46631-y
Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
Ariel Goldstein1,2, Avigail Grinstein-Dabush2,8, Mariano Schain2,8,
Haocheng Wang3, Zhuoqiao Hong3, Bobbi Aubrey3,4, Mariano Schain2,
Samuel A. Nastase3, Zaid Zada3, Eric Ham3, Amir Feder2,
Harshvardhan Gazula3, Eliav Buchnik2, Werner Doyle4, Sasha Devore4,
Patricia Dugan4, Roi Reichart5, Daniel Friedman4, Michael Brenner2,6,
Avinatan Hassidim2, Orrin Devinsky4, Adeen Flinker4,7 & Uri Hasson2,3
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. The common geometric patterns allow us to predict the brain embedding in IFG of a given left-out word based solely on its geometrical relationship to other non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.
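As a rough sketch of what a zero-shot encoding analysis of this kind involves, the NumPy illustration below fits a leave-one-out ridge regression from contextual embeddings to synthetic "brain embeddings" and checks held-out word identification; the dimensions, regularization, and evaluation protocol are placeholders, not the study's actual pipeline:

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression mapping X -> Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
n_words, d_llm, d_brain = 200, 50, 64
X = rng.normal(size=(n_words, d_llm))             # contextual (DLM) embeddings
W_true = rng.normal(size=(d_llm, d_brain))
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, d_brain))  # toy "brain embeddings"

# Zero-shot evaluation: hold out one word, fit the map on the rest,
# then check that the predicted embedding is closest to the held-out one.
correct = 0
for i in range(n_words):
    mask = np.arange(n_words) != i
    W = ridge_fit(X[mask], Y[mask])
    pred = X[i] @ W
    sims = Y @ pred / (np.linalg.norm(Y, axis=1) * np.linalg.norm(pred))
    correct += int(np.argmax(sims) == i)
print("zero-shot top-1 identification:", correct / n_words)
```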
Deep language models (DLMs) trained on massive corpora of natural text provide a radically different framework for how language is represented in the brain. The recent success of DLMs in modeling natural language can be traced to the gradual development of three foundational ideas in computational linguistics. The first key innovation was to (1) embed words in continuous vector space: Traditionally, words in language were viewed as discrete symbolic units in a lexicon1,2. Early work in distributional semantics demonstrated that the meaning of words could instead be captured by geometric relationships in a continuous vector space based on
Received: 24 July 2022
Accepted: 4 March 2024
1Business School, Data Science department and Cognitive Department, Hebrew University, Jerusalem, Israel.2Google Research, Tel Aviv, Israel.3Department
of Psychology and the Neuroscience Institute, Princeton University, Princeton, NJ, USA.4New York University Grossman School of Medicine, New York, NY,
USA.5Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel.6School of Engineering and Applied Science,
Harvard University, Cambridge, MA, USA.7New York University Tandon School of Engineering, Brooklyn, NY, USA.8These authors contributed equally: Avigail
Grinstein-Dabush, Mariano Schain. e-mail: ariel.y.goldstein@mail.huji.ac.il
Nature Communications | (2024) 15:2768
|
2311.17932.pdf | Generating Molecular Conformer Fields
Yuyang Wang1Ahmed A. Elhag1Navdeep Jaitly1Joshua M. Susskind1Miguel Angel Bautista1
Abstract
In this paper we tackle the problem of generat-
ing conformers of a molecule in 3D space given
its molecular graph. We parameterize these con-
formers as continuous functions that map ele-
ments from the molecular graph to points in 3D
space. We then formulate the problem of learn-
ing to generate conformers as learning a distribu-
tion over these functions using a diffusion gener-
ative model, called Molecular Conformer Fields
(MCF ). Our approach is simple and scalable, and
achieves state-of-the-art performance on challeng-
ing molecular conformer generation benchmarks
while making no assumptions about the explicit
structure of molecules ( e.g. modeling torsional
angles). MCF represents an advance in extend-
ing diffusion models to handle complex scientific
problems in a conceptually simple, scalable and
effective manner.
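To ground the idea of learning a diffusion model over conformers, the sketch below shows a generic denoising-diffusion training step applied to a toy set of 3D atom coordinates; it is a simplified illustration with a placeholder denoiser and omits the function-space parameterization, graph conditioning, and architecture that define MCF:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)               # standard DDPM-style noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def toy_denoiser(noisy_coords, t):
    """Stand-in for a learned network conditioned on the molecular graph;
    a real model would predict the added noise from (noisy_coords, graph, t)."""
    return np.zeros_like(noisy_coords)

def diffusion_loss(coords):
    """One training step: corrupt clean 3D coordinates with Gaussian noise
    at a random timestep and regress the noise."""
    t = rng.integers(0, T)
    eps = rng.normal(size=coords.shape)
    noisy = np.sqrt(alphas_bar[t]) * coords + np.sqrt(1 - alphas_bar[t]) * eps
    return np.mean((toy_denoiser(noisy, t) - eps) ** 2)

conformer = rng.normal(size=(14, 3))             # toy molecule with 14 atoms
print("toy diffusion loss:", diffusion_loss(conformer))
```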
1. Introduction
In this paper we tackle the problem of Molecular Conformer
Generation, i.e. predicting the diverse low-energy three-
dimensional conformers of molecules, relying solely on
their molecular graphs as illustrated in Fig. 1. Molecular
Conformer Generation is a fundamental problem in compu-
tational drug discovery and chemo-informatics, where un-
derstanding the intricate interactions between molecular and
protein structures in 3D space is critical, affecting aspects
such as charge distribution, potential energy, etc. (Batzner
et al., 2022). The core challenge associated with conformer
generation springs from the vast complexity of the 3D struc-
ture space, encompassing factors such as bond lengths and
torsional angles. Despite the molecular graph dictating po-
tential 3D conformers through specific constraints, such as
bond types and spatial arrangements determined by chiral
centers, the conformational space experiences exponential
growth with the expansion of the graph size and the number
of rotatable bonds (Axelrod & Gomez-Bombarelli, 2022).
1Apple. {yuyang wang4, aa elhag, jsusskind, njaitly,
mbautistamartin }@apple.com.
Preprint. Under review.This complicates brute force approaches, making them vir-
tually unfeasible for even moderately small molecules.
Systematic methods, like OMEGA (Hawkins et al., 2010),
offer rapid processing through rule-based generators and
curated torsion templates. Despite their efficiency, these
models typically fail on complex molecules, as they of-
ten overlook global interactions and are tricky to extend to
inputs like transition states or open-shell molecules. Clas-
sic stochastic methods, like molecular dynamics (MD) and
Markov chain Monte Carlo (MCMC), rely on extensively ex-
ploring the energy landscape to find low-energy conformers.
Such techniques suffer from sampling inefficiency for large
molecules and struggle to generate diverse representative
conformers (Hawkins, 2017; Wilson et al., 1991; Grebner
et al., 2011). In the domain of learning-based approaches,
several works have looked at conformer generation prob-
lems through the lens of probabilistic modeling, using either
normalizing flows (Xu et al., 2021a) or diffusion models
(Xu et al., 2022; Jing et al., 2022). These approaches tend
to use equivariant network architectures to deal with molec-
ular graphs (Xu et al., 2022) or model domain-specific fac-
tors like torsional angles (Ganea et al., 2021; Jing et al.,
2022). However, explicitly enforcing these domain-specific
inductive biases can sometimes come at a cost. For exam-
ple, Torsional Diffusion relies on rule-based methods to
find rotatable bonds which may fail especially for complex
molecules. Also, the quality of generated conformers are
adhered to the non-differentiable cheminformatic methods
used to predict local substructures. On the other hand, re-
cent works have proposed domain-agnostic approaches for
generative modeling of data in function space (Du et al.,
2021; Dupont et al., 2022b;a; Zhuang et al., 2023) obtaining
great performance. As an example, in (Zhuang et al., 2023)
the authors use a diffusion model to learn a distribution over
fields f, showing great results on different data domains
like images (i.e., f: R²→R³) or 3D geometry (i.e., f: R³→R¹), where the domain of the function Rⁿ is
fixed across functions. However, dealing with fields defined
on different domains ( e.g. different molecular graphs, as
in molecular conformer generation) still remains an open
problem.
To address these issues, we present Molecular Conformer
Fields ( MCF ), an approach to learn generative models
of molecular conformers. We interpret conformers as
|
2305.15076.pdf | Meta-Learning Online Adaptation of Language Models
Nathan Hu* Eric Mitchell*
Christopher D. Manning Chelsea Finn
Stanford University
Abstract
Large language models encode impressively
broad world knowledge in their parameters.
However, the knowledge in static language
models falls out of date, limiting the model’s
effective “shelf life.” While online fine-tuning
can reduce this degradation, we find that
naively fine-tuning on a stream of documents
leads to a low level of information uptake.
We hypothesize that online fine-tuning does
not sufficiently attend to important informa-
tion. That is, the gradient signal from impor-
tant tokens representing factual information
is drowned out by the gradient from inher-
ently noisy tokens, suggesting that a dynamic,
context-aware learning rate may be beneficial.
We therefore propose learning which tokens to
upweight. We meta-train a small, autoregres-
sive model to reweight the language modeling
loss for each token during online fine-tuning,
with the objective of maximizing the out-of-
date base question-answering model’s ability
to answer questions about a document after
a single weighted gradient step. We call this
approach Context- aware Meta-learned Loss
Scaling (CaMeLS). Across three different dis-
tributions of documents, our experiments find
that CaMeLS provides substantially improved
information uptake on streams of thousands of
documents compared with standard fine-tuning
and baseline heuristics for reweighting token
losses.
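The central computation — rescaling the per-token language-modeling loss before each online gradient step — can be sketched as follows; this is a toy NumPy illustration with hand-picked numbers, and in CaMeLS the weights are produced by a small meta-trained model rather than chosen by hand:

```python
import numpy as np

def weighted_lm_loss(token_log_probs, token_weights):
    """Per-token negative log-likelihood, rescaled by importance weights so
    that informative tokens dominate the fine-tuning gradient."""
    nll = -np.asarray(token_log_probs)
    w = np.asarray(token_weights)
    return float(np.sum(w * nll) / (np.sum(w) + 1e-8))

# Toy document: most tokens are generic, one token carries the fact to absorb.
log_probs = [-0.2, -0.3, -5.1, -0.4, -0.25]   # model's log p(token | prefix)
uniform   = [1.0, 1.0, 1.0, 1.0, 1.0]         # standard online fine-tuning
focused   = [0.1, 0.1, 5.0, 0.1, 0.1]         # CaMeLS-style reweighting (illustrative)
print("uniform loss:   ", weighted_lm_loss(log_probs, uniform))
print("reweighted loss:", weighted_lm_loss(log_probs, focused))
```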
1 Introduction
Large language models learn impressively broad
world knowledge through large-scale unsupervised
pre-training, which they can leverage for a wide
variety of downstream tasks (Brown et al., 2020;
Chowdhery et al., 2022; Bubeck et al., 2023). How-
ever, large language models are typically static ar-
tifacts, and as the world changes, the knowledge
encoded in their parameters becomes stale. While
* Equal contribution. Correspondence to zixia314@
stanford.edu ,eric.mitchell@cs.stanford.edu .
Figure 1: The proposed method CaMeLS learns to rescale the
per-token online loss, sparsifying the fine-tuning gradients to
emphasize informative timesteps. The middle row shows the
weights output by CaMeLS. The topandbottom rows show
raw and weighted per-token gradient norms, respectively.
retrieval-augmented models are one approach to
mitigating the staleness issue, even very large lan-
guage models often fail to correctly update their
memorized predictions when presented with coun-
terfactual retrieved information (Longpre et al.,
2021; Li et al., 2022; Si et al., 2023). Moreover,
purely parametric language models are uniquely
suited for edge computing due to their compact
size (relative to a large retrieval index) and simplic-
ity of inference (Gerganov, 2023). Recent work
has thus considered variants of online fine-tuning
on a stream of documents to efficiently perform
direct updates to the knowledge inside of a large
language model (Lazaridou et al., 2021; Jang et al.,
2022).
Ideally, we could simply fine-tune a language
model on an online stream of documents, and the
information contained in those documents would
be readily available for the model to use in a variety
of downstream tasks, such as answering questions
about the information in the documents. Unfortu-
nately, we find that in this online adaptation setting,
fine-tuning with a well-tuned learning rate leads |
2102.03902.pdf | Nystr ¨omformer: A Nystr ¨om-based Algorithm for Approximating Self-Attention
Yunyang Xiong1Zhanpeng Zeng1Rudrasis Chakraborty2Mingxing Tan3
Glenn Fung4Yin Li1Vikas Singh1
1University of Wisconsin-Madison2UC Berkeley3Google Brain4American Family Insurance
yxiong43@wisc.edu, zzeng38@wisc.edu, rudra@berkeley.edu, tanmingxing@google.com, gfung@amfam.com,
yin.li@wisc.edu, vsingh@biostat.wisc.edu
Abstract
Transformers have emerged as a powerful tool for a broad
range of natural language processing tasks. A key compo-
nent that drives the impressive performance of Transform-
ers is the self-attention mechanism that encodes the influence
or dependence of other tokens on each specific token. While
beneficial, the quadratic complexity of self-attention on the
input sequence length has limited its application to longer se-
quences – a topic being actively studied in the community. To
address this limitation, we propose Nyströmformer – a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to
longer sequences with thousands of tokens. We perform eval-
uations on multiple downstream tasks on the GLUE bench-
mark and IMDB reviews with standard sequence length, and
find that our Nyströmformer performs comparably, or in a
few cases, even slightly better, than standard self-attention.
On longer sequence tasks in the Long Range Arena (LRA)
benchmark, Nyströmformer performs favorably relative to
other efficient self-attention methods. Our code is available
at https://github.com/mlpen/Nystromformer.
Introduction
Transformer-based models, such as BERT (Devlin et al.
2019) and GPT-3 (Brown et al. 2020), have been very
successful in natural language processing (NLP), achiev-
ing state-of-the-art performance in machine translation
(Vaswani et al. 2017), natural language inference (Williams,
Nangia, and Bowman 2018), paraphrasing (Dolan and
Brockett 2005), text classification (Howard and Ruder
2018), question answering (Rajpurkar et al. 2016) and many
other NLP tasks (Peters et al. 2018; Radford et al. 2018).
A key feature of transformers is what is known as the self-
attention mechanism (Vaswani et al. 2017), where each to-
ken’s representation is computed from all other tokens. Self-
attention enables interactions of token pairs across the full
sequence and has been shown quite effective.
Despite the foregoing advantages, self-attention also turns
out to be a major efficiency bottleneck since it has a memory
and time complexity of O(n²), where n is the length of an input sequence. This leads to high memory and computational
requirements for training large Transformer-based models.
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
For example, training a BERT-large model (Devlin et al.
2019) will need 4 months using a single Tesla V100 GPU
(equivalent to 4 days using a 4x4 TPU pod). Further, the
O(n²) complexity makes it prohibitively expensive to train large Transformers with long sequences (e.g., n = 2048).
To address this challenge, several recent works have pro-
posed strategies that avoid incurring the quadratic cost when
dealing with longer input sequences. For example, (Dai
et al. 2019) suggests a trade-off between memory and com-
putational efficiency. The ideas described in (Child et al.
2019; Kitaev, Kaiser, and Levskaya 2019) decrease the self-
attention complexity to O(n√n) and O(n log n), respectively. In (Shen et al. 2018b; Katharopoulos et al. 2020; Wang et al. 2020), self-attention complexity can be reduced to O(n) with various approximation ideas, each with its own strengths and limitations.
In this paper, we propose an O(n) approximation, both
in the sense of memory and time, for self-attention. Our
model, Nyströmformer, scales linearly with the input sequence length n. This is achieved by leveraging the celebrated Nyström method, repurposed for approximating self-attention. Specifically, our Nyströmformer algorithm makes use of landmark (or Nyström) points to reconstruct the softmax matrix in self-attention, thereby avoiding computing the n×n softmax matrix. We show that this yields a good approximation of the true self-attention.
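A schematic version of this landmark-based reconstruction is shown below; it is a NumPy illustration of the general idea, using segment means as landmarks and an exact pseudo-inverse in place of the paper's iterative approximation, not the released implementation:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks=8):
    """Landmark-based approximation of softmax(QK^T / sqrt(d)) V.
    Landmarks are segment means of Q and K; the small m-by-m kernel is
    inverted with a pseudo-inverse for simplicity."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    Q_lm = Q.reshape(num_landmarks, n // num_landmarks, d).mean(axis=1)
    K_lm = K.reshape(num_landmarks, n // num_landmarks, d).mean(axis=1)
    kernel1 = softmax(Q @ K_lm.T * scale)        # n x m
    kernel2 = softmax(Q_lm @ K_lm.T * scale)     # m x m
    kernel3 = softmax(Q_lm @ K.T * scale)        # m x n
    return kernel1 @ np.linalg.pinv(kernel2) @ (kernel3 @ V)

rng = np.random.default_rng(0)
n, d = 256, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
exact = softmax(Q @ K.T / np.sqrt(d)) @ V
approx = nystrom_attention(Q, K, V)
print("mean abs error:", np.abs(exact - approx).mean())
```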
To evaluate our method, we consider a transfer learning
setting using Transformers, where models are first pretrained
with a language modeling objective on a large corpus, and
then finetuned on target tasks using supervised data (Devlin
et al. 2019; Liu et al. 2019; Lewis et al. 2020; Wang et al.
2020). Following BERT (Devlin et al. 2019; Liu et al. 2019),
we pretrain our proposed model on English Wikipedia and
BookCorpus (Zhu et al. 2015) using a masked-language-
modeling objective. We observe a similar performance to
the baseline BERT model on English Wikipedia and Book-
Corpus. We then finetune our pretrained models on multi-
ple downstream tasks in the GLUE benchmark (Wang et al.
2018) and IMDB reviews (Maas et al. 2011), and compare
our results to BERT in both accuracy and efficiency. Across
all tasks, our model compares favorably to the vanilla pre-
trained BERT with significant speedups.
Finally, we evaluate our model on tasks with longer se- |
2310.07820.pdf | Large Language Models Are
Zero-Shot Time Series Forecasters
Nate Gruver∗
NYUMarc Finzi∗
CMUShikai Qiu∗
NYUAndrew Gordon Wilson
NYU
Abstract
By encoding time series as a string of numerical digits, we can frame time series
forecasting as next-token prediction in text. Developing this approach, we find that
large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-
shot extrapolate time series at a level comparable to or exceeding the performance
of purpose-built time series models trained on the downstream tasks. To facilitate
this performance, we propose procedures for effectively tokenizing time series data
and converting discrete distributions over tokens into highly flexible densities over
continuous values. We argue the success of LLMs for time series stems from their
ability to naturally represent multimodal distributions, in conjunction with biases
for simplicity, and repetition, which align with the salient features in many time
series, such as repeated seasonal trends. We also show how LLMs can naturally
handle missing data without imputation through non-numerical text, accommodate
textual side information, and answer questions to help explain predictions. While
we find that increasing model size generally improves performance on time series,
we show GPT-4 can perform worse than GPT-3 because of how it tokenizes
numbers, and poor uncertainty calibration, which is likely the result of alignment
interventions such as RLHF.
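The core encoding step — rendering a numeric series as a digit string so that forecasting becomes next-token prediction — can be sketched as follows; this is a simplified illustration, and the paper's actual scheme also handles rescaling and the quirks of specific tokenizers (e.g., GPT-3's vs. LLaMA's):

```python
def encode_series(values, precision=2, sep=" , "):
    """Render a numeric series as a digit string for next-token prediction.
    Digits are space-separated to encourage digit-level tokenization."""
    tokens = []
    for v in values:
        digits = f"{v:.{precision}f}".replace(".", "")   # e.g. 6.35 -> "635"
        tokens.append(" ".join(digits))
    return sep.join(tokens)

def decode_series(text, precision=2, sep=" , "):
    """Invert encode_series back to floats."""
    vals = []
    for chunk in text.split(sep):
        digits = chunk.replace(" ", "")
        vals.append(int(digits) / 10 ** precision)
    return vals

series = [6.35, 7.02, 7.81]
encoded = encode_series(series)
print(encoded)                  # "6 3 5 , 7 0 2 , 7 8 1"
print(decode_series(encoded))   # [6.35, 7.02, 7.81]
```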
1 Introduction
Despite similarities with other sequence modeling problems, such as text, audio, or video, time series
has two particularly challenging properties. Unlike video or audio, which typically have consistent
input scales and sampling rates, aggregated time series datasets often comprise sequences from
radically different sources, sometimes with missing values. Moreover, common applications of
time series forecasting, such as weather or financial data, require extrapolating from observations
that contain a tiny fraction of the possible information, making accurate point predictions nearly
impossible and uncertainty estimation especially important. While large-scale pretraining has become
a key element of training large neural networks in vision and text, enabling performance to scale
directly with data availability, pretraining is not typically used for time series modeling, where there is
no consensus unsupervised objective and large, cohesive pretraining datasets are not readily available.
Consequently, simple time series methods (e.g. ARIMA [ 8], and linear models [ 52]) often outperform
deep learning methods on popular benchmarks [24].
In this paper, we demonstrate how large language models (LLM) can naturally bridge the gap
between the simple biases of traditional methods and the complex representational learning and
generative abilities of modern deep learning. In particular, we introduce an exceedingly simple
method, LLMTime2, to apply pretrained LLMs for continuous time series prediction problems,
illustrated at a high level in Figure 1. At its core, this method represents the time series as a string
of numerical digits, and views time series forecasting as next-token prediction in text, unlocking
∗Equal contribution
2https://github.com/ngruver/llmtime
37th Conference on Neural Information Processing Systems (NeurIPS 2023). |
2211.10438.pdf | SmoothQuant: Accurate and Efficient
Post-Training Quantization for Large Language Models
Guangxuan Xiao* 1Ji Lin* 1Mickael Seznec2Hao Wu2Julien Demouth2Song Han1
Abstract
Large language models (LLMs) show excel-
lent performance but are compute- and memory-
intensive. Quantization can reduce memory and
accelerate inference. However, for LLMs be-
yond 100 billion parameters, existing methods
cannot maintain accuracy or do not run effi-
ciently on hardware. We propose SmoothQuant,
a training-free, accuracy-preserving, and general-
purpose post-training quantization (PTQ) solution
to enable 8-bit weight, 8-bit activation (W8A8)
quantization for LLMs. Based on the fact that
weights are easy to quantize while activations are
not, SmoothQuant smooths the activation outliers
by offline migrating the quantization difficulty
from activations to weights with a mathemati-
cally equivalent transformation. SmoothQuant
enables an INT8 quantization of both weights
and activations for all the matrix multiplications
in LLMs, including OPT-175B, BLOOM-176B,
GLM-130B, and MT-NLG 530B. SmoothQuant
has better hardware efficiency than existing tech-
niques. We demonstrate up to 1.56 ×speedup
and 2×memory reduction for LLMs with negligi-
ble loss in accuracy. We integrate SmoothQuant
into FasterTransformer, a state-of-the-art LLM
serving framework, and achieve faster inference
speed with half the number of GPUs compared
to FP16, enabling the serving of a 530B LLM
within a single node. Our work offers a turn-
key solution that reduces hardware costs and de-
mocratizes LLMs. Code is available at https:
//github.com/mit-han-lab/smoothquant.
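The smoothing transformation itself is compact enough to sketch directly; the minimal NumPy illustration below shows the per-channel scale migration described above (not the released kernel-level implementation; the migration strength α = 0.5 and the toy tensors are placeholders):

```python
import numpy as np

def smooth(X, W, alpha=0.5):
    """Migrate quantization difficulty from activations to weights via a
    per-channel scale s_j = max|X_j|^alpha / max|W_j|^(1 - alpha).
    The product (X / s) @ (diag(s) W) equals X @ W exactly."""
    s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))
    return X / s, W * s[:, None]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8)); X[:, 3] *= 50        # channel 3 has activation outliers
W = rng.normal(size=(8, 16))
X_s, W_s = smooth(X, W)
print(np.allclose(X @ W, X_s @ W_s))              # True: mathematically equivalent
print(np.abs(X).max(), "->", np.abs(X_s).max())   # activation outliers are tamed
```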
1 Introduction
Large-scale language models (LLMs) show excellent per-
formance on various tasks (Brown et al., 2020a; Zhang
et al., 2022). However, serving LLMs is budget and energy-
*Equal contribution1Massachusetts Institute of Technology
2NVIDIA. Correspondence to: Guangxuan Xiao <xgx@mit.edu>,
Ji Lin <jilin@mit.edu>.
Table 1: SmoothQuant achieves high hardware efficiency
while maintaining the accuracy of LLMs with 530 billion
parameters in a training-free fashion.
                     LLM (100B+) Accuracy    Hardware Efficiency
ZeroQuant                    ✗                      ✓
Outlier Suppression          ✗                      ✓
LLM.int8()                   ✓                      ✗
SmoothQuant                  ✓                      ✓
consuming due to their gigantic model size. For exam-
ple, the GPT-3 (Brown et al., 2020a) model contains 175B
parameters, which will consume at least 350GB of mem-
ory to store and run in FP16, requiring 8 ×48GB A6000
GPUs or 5×80GB A100 GPUs just for inference. Due to
the huge computation and communication overhead, the
inference latency may also be unacceptable to real-world
applications. Quantization is a promising way to reduce
the cost of LLMs (Dettmers et al., 2022; Yao et al., 2022).
By quantizing the weights and activations with low-bit in-
tegers, we can reduce GPU memory requirements, in size
and bandwidth, and accelerate compute-intensive operations
(i.e.,GEMM in linear layers, BMM in attention). For instance,
INT8 quantization of weights and activations can halve the
GPU memory usage and nearly double the throughput of
matrix multiplications compared to FP16.
However, unlike CNN models or smaller transformer mod-
els like BERT (Devlin et al., 2019), the activations of LLMs
are difficult to quantize. When we scale up LLMs beyond
6.7B parameters, systematic outliers with large magnitude
will emerge in activations (Dettmers et al., 2022), leading
to large quantization errors and accuracy degradation. Ze-
roQuant (Yao et al., 2022) applies dynamic per-token ac-
tivation quantization and group-wise weight quantization
(defined in Figure 2 Sec. 2). It can be implemented effi-
ciently and delivers good accuracy for GPT-3-350M and
GPT-J-6B. However, it can not maintain the accuracy for
the large OPT model with 175 billion parameters (see Sec-
tion 5.2). LLM.int8() (Dettmers et al., 2022) addresses
that accuracy issue by further introducing a mixed-precision
decomposition (i.e., it keeps outliers in FP16 and uses INT8 |
2009.14794.pdf | Published as a conference paper at ICLR 2021
RETHINKING ATTENTION WITH PERFORMERS
Krzysztof Choromanski∗1, Valerii Likhosherstov∗2, David Dohan∗1, Xingyou Song∗1
Andreea Gane∗1, Tamas Sarlos∗1, Peter Hawkins∗1, Jared Davis∗3, Afroz Mohiuddin1
Lukasz Kaiser1, David Belanger1, Lucy Colwell1,2, Adrian Weller2,4
1Google2University of Cambridge3DeepMind4Alan Turing Institute
ABSTRACT
We introduce Performers , Transformer architectures which can estimate regular
(softmax) full-rank-attention Transformers with provable accuracy, but using only
linear (as opposed to quadratic) space and time complexity, without relying on
any priors such as sparsity or low-rankness. To approximate softmax attention-
kernels, Performers use a novel Fast Attention Via positive Orthogonal Random
features approach (FA VOR+), which may be of independent interest for scalable
kernel methods. FA VOR+ can also be used to efficiently model kernelizable
attention mechanisms beyond softmax. This representational power is crucial to
accurately compare softmax with other kernels for the first time on large-scale tasks,
beyond the reach of regular Transformers, and investigate optimal attention-kernels.
Performers are linear architectures fully compatible with regular Transformers
and with strong theoretical guarantees: unbiased or nearly-unbiased estimation
of the attention matrix, uniform convergence and low estimation variance. We
tested Performers on a rich set of tasks stretching from pixel-prediction through
text models to protein sequence modeling. We demonstrate competitive results
with other examined efficient sparse and dense attention methods, showcasing
effectiveness of the novel attention-learning paradigm leveraged by Performers.
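To illustrate the random-feature idea behind FAVOR+, here is a minimal NumPy sketch using positive random features with an i.i.d. Gaussian projection; the orthogonal coupling of the features and other variance-reduction details from the paper are omitted, so treat this as an illustration rather than the reference implementation:

```python
import numpy as np

def positive_random_features(x, omega):
    """Positive features: phi(x)_i = exp(w_i . x - |x|^2 / 2) / sqrt(m),
    so that E[phi(q) . phi(k)] = exp(q . k), the (unnormalized) softmax kernel."""
    m = omega.shape[0]
    return np.exp(x @ omega.T - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def performer_attention(Q, K, V, num_features=256, seed=0):
    """Linear-time attention: compute phi(Q) (phi(K)^T V) instead of
    softmax(QK^T) V, then normalize row-wise."""
    rng = np.random.default_rng(seed)
    d = Q.shape[-1]
    omega = rng.normal(size=(num_features, d))     # (orthogonality omitted here)
    q, k = Q / d**0.25, K / d**0.25                # fold in the 1/sqrt(d) scaling
    phi_q, phi_k = positive_random_features(q, omega), positive_random_features(k, omega)
    numer = phi_q @ (phi_k.T @ V)                  # n x d, built in O(n m d)
    denom = phi_q @ phi_k.sum(axis=0)              # row-wise normalizer
    return numer / denom[:, None]

rng = np.random.default_rng(0)
n, d = 128, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
exact = (weights / weights.sum(axis=1, keepdims=True)) @ V
print("mean abs error:", np.abs(exact - performer_attention(Q, K, V)).mean())
```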
1 INTRODUCTION AND RELATED WORK
Transformers (Vaswani et al., 2017; Dehghani et al., 2019) are powerful neural network architectures
that have become SOTA in several areas of machine learning including natural language processing
(NLP) (e.g. speech recognition (Luo et al., 2020)), neural machine translation (NMT) (Chen et al.,
2018), document generation/summarization, time series prediction, generative modeling (e.g. image
generation (Parmar et al., 2018)), music generation (Huang et al., 2019), and bioinformatics (Rives
et al., 2019; Madani et al., 2020; Ingraham et al., 2019; Elnaggar et al., 2019; Du et al., 2020).
Transformers rely on a trainable attention mechanism that identifies complex dependencies between
the elements of each input sequence. Unfortunately, the regular Transformer scales quadratically
with the number of tokens Lin the input sequence, which is prohibitively expensive for large L
and precludes its usage in settings with limited computational resources even for moderate values
ofL. Several solutions have been proposed to address this issue (Beltagy et al., 2020; Gulati et al.,
2020; Chan et al., 2020; Child et al., 2019; Bello et al., 2019). Most approaches restrict the attention
mechanism to attend to local neighborhoods (Parmar et al., 2018) or incorporate structural priors
on attention such as sparsity (Child et al., 2019), pooling-based compression (Rae et al., 2020)
clustering/binning/convolution techniques (e.g. (Roy et al., 2020) which applies k-means clustering
to learn dynamic sparse attention regions, or (Kitaev et al., 2020), where locality sensitive hashing
is used to group together tokens of similar embeddings), sliding windows (Beltagy et al., 2020),
or truncated targeting (Chelba et al., 2020). There is also a long line of research on using dense
attention matrices, but defined by low-rank kernels substituting softmax (Katharopoulos et al., 2020;
Shen et al., 2018). Those methods critically rely on kernels admitting explicit representations as
dot-products of finite positive-feature vectors.
The approaches above do not aim to approximate regular attention, but rather propose simpler and
more tractable attention mechanisms, often by incorporating additional constraints (e.g. identical
query and key sets as in (Kitaev et al., 2020)), or by trading regular with sparse attention using more
∗Equal contribution. Correspondence to {kchoro,lcolwell}@google.com .
Code for Transformer models on protein data can be found in github.com/google-research/
google-research/tree/master/protein_lm and Performer code can be found in github.com/
google-research/google-research/tree/master/performer . Google AI Blog: https://
ai.googleblog.com/2020/10/rethinking-attention-with-performers.html
|
2305.19466.pdf | The Impact of Positional Encoding on Length
Generalization in Transformers
Amirhossein Kazemnejad1,2, Inkit Padhi3
Karthikeyan Natesan Ramamurthy3,Payel Das3,Siva Reddy1,2,4
1Mila - Québec AI Institute;2McGill University;
3IBM Research;4Facebook CIFAR AI Chair
{amirhossein.kazemnejad,siva.reddy}@mila.quebec
inkpad@ibm.com ,{knatesa,daspa}@us.ibm.com
Abstract
Length generalization, the ability to generalize from small training context sizes
to larger ones, is a critical challenge in the development of Transformer-based
language models. Positional encoding (PE) has been identified as a major factor
influencing length generalization, but the exact impact of different PE schemes
on extrapolation in downstream tasks remains unclear. In this paper, we conduct
a systematic empirical study comparing the length generalization performance
of decoder-only Transformers with five different position encoding approaches
including Absolute Position Embedding (APE), T5’s Relative PE, ALiBi, and
Rotary, in addition to Transformers without positional encoding (NoPE). Our
evaluation encompasses a battery of reasoning and mathematical tasks. Our findings
reveal that the most commonly used positional encoding methods, such as ALiBi,
Rotary, and APE, are not well suited for length generalization in downstream
tasks. More importantly, NoPE outperforms other explicit positional encoding
methods while requiring no additional computation. We theoretically demonstrate
that NoPE can represent both absolute and relative PEs, but when trained with
SGD, it mostly resembles T5’s Relative PE attention patterns. Finally, we find
that scratchpad is not always helpful to solve length generalization and its format
highly impacts the model’s performance. Overall, our work suggests that explicit
position encodings are not essential for decoder-only Transformers to generalize
well to longer sequences.
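For orientation, here is what one of the compared schemes looks like in code: a minimal NumPy sketch of ALiBi-style linear distance biases added to causal attention scores, using one common formulation of the head slopes. With NoPE, by contrast, no such bias or embedding is added and the causal mask alone carries order information. This is an illustration, not the evaluation code used in the paper:

```python
import numpy as np

def alibi_bias(seq_len, num_heads):
    """ALiBi: add a head-specific linear penalty -m_h * (i - j) to causal
    attention scores; no position embeddings are added to the tokens."""
    slopes = 2.0 ** (-8.0 * (np.arange(1, num_heads + 1) / num_heads))
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    dist = np.where(j <= i, i - j, 0)              # only past positions matter
    return -slopes[:, None, None] * dist           # shape (heads, seq, seq)

# With NoPE, scores are just QK^T / sqrt(d) under a causal mask; with ALiBi,
# the bias below is added to those scores before the softmax.
bias = alibi_bias(seq_len=6, num_heads=4)
print(bias[0])        # nearby tokens are penalized least for head 0
```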
1 Introduction
The ability to generalize from smaller training context sizes to larger ones, commonly known as
length generalization, is a major challenge for Transformer-based language models (Vaswani et al.,
2017; Deletang et al., 2023; Zhang et al., 2023). Even with larger Transformers, this issue persists
(Brown et al., 2020; Furrer et al., 2020). With larger context sizes, a model can benefit from
more in-context-learning examples, higher numbers of reasoning and planning steps, or longer text
generation. However, training a Transformer with a larger context size can be excessively slow and
memory-intensive. This is even more pronounced in the recent paradigm of model finetuning on
instruction-following datasets (Wei et al., 2022a; Chung et al., 2022; Ouyang et al., 2022). It is not
only infeasible to train the model on all possible context lengths, but also the number of training
examples drops dramatically as the sequence length increases requiring the model to generalize
from finite and shorter-length training examples. In this work, we focus on the effect of positional
encoding on length generalization in “decoder-only” Transformers on various tasks trained from
scratch. Figure 1 summarizes our finding that using no positional encoding is better than using
explicit positional encodings.
Preprint. |
10.1093.molbev.msx095.pdf | Inference of Epistatic Effects Leading to Entrenchment and
Drug Resistance in HIV-1 Protease
William F. Flynn,1,2Allan Haldane,2,3Bruce E. Torbett,4and Ronald M. Levy*,2,3
1Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ
2Center for Biophysics and Computational Biology, Temple University, Philadelphia, PA
3Department of Chemistry, Temple University, Philadelphia, PA
4Department of Molecular and Experimental Medicine, The Scripps Research Institute, La Jolla, CA
*Corresponding author: E-mail: ronlevy@temple.edu.
Associate editor: Tal Pupko
Abstract
Understanding the complex mutation patterns that give rise to drug resistant viral strains provides a foundation for developing more effective treatment strategies for HIV/AIDS. Multiple sequence alignments of drug-experienced HIV-1 protease sequences contain networks of many pair correlations which can be used to build a (Potts) Hamiltonian model of these mutation patterns. Using this Hamiltonian model, we translate HIV-1 protease sequence covariation data into quantitative predictions for the probability of observing specific mutation patterns which are in agreement with the observed sequence statistics. We find that the statistical energies of the Potts model are correlated with the fitness of individual proteins containing therapy-associated mutations as estimated by in vitro measurements of protein stability and viral infectivity. We show that the penalty for acquiring primary resistance mutations depends on the epistatic interactions with the sequence background. Primary mutations which lead to drug resistance can become highly advantageous (or entrenched) by the complex mutation patterns which arise in response to drug therapy despite being destabilizing in the wildtype background. Anticipating epistatic effects is important for the design of future protease inhibitor therapies.
Key words: epistasis, mutational landscape, statistical inference, coevolution, HIV, drug resistance.
Introduction
The ability of HIV-1 to rapidly mutate leads to antiretroviral
therapy (ART) failure among infected patients. Enzymes
coded by the polgene play critical roles in viral maturation
and have been key targets of several families of drugs used in
combination therapies. The protease enzyme is responsible
for the cleavage of the Gag and Gag-Pol polyproteins into
functional constituent proteins and it has been estimated
that resistance develops in as many as 50% of patientsundergoing monotherapy ( Richman et al. 2004 )a n da s
many as 30% of patients undergoing modern combination
antiretroviral therapy (c-ART) ( Gupta et al. 2008 ).
The combined selective pressures of the human immune
response and antiretroviral therapies greatly affect the evolu-
tion of targeted portions of the HIV-1 genome and give rise to
patterns of correlated amino acid substitutions. As an enzyme
responsible for the maturation of the virion, the mutational
landscape of HIV-1 protease is further constrained due to
function, structure, therm odynamics, and kinetics ( Lockless
et al. 1999 ;Zeldovich et al. 2007 ;Zeldovich and Shakhnovich
2008 ;Bloom et al. 2010 ;Haq et al. 2012 ) .A sac o n s e q u e n c eo f
these constraints, complex mutational patterns often arise in
patients who have failed c-ART therapies containing protease
inhibitors (PI), with mutations located both at critical residuepositions in or near the protease active site and others distal
f r o mt h ea c t i v es i t e( Chang and Torbett 2011 ;Fun et al. 2012 ;
Haq et al. 2012 ;Flynn et al. 2015 ). In particular, the selective
pressure of PI therapy gives rise to patterns of strongly corre-
lated mutations generally not observed in the absence of c-
ART, and more therapy-associated mutations accumulate
under PI therapy than under all other types of ART ( Wu
et al. 2003 ;Shafer 2006 ;Shafer and Schapiro 2008 ). In fact,
the majority of drug-experienced subtype B protease se-
quences in the Stanford HIV Drug Resistance Database
(HIVDB) have more than four PI-therapy-associated muta-
tions (see supplementary fig. S2 , Supplementary Material on-
line). Within the Stanford HIVDB are patterns of multiple
resistance mutations, and in order to overcome the develop-
ment of resistance, understanding these patterns is critical.
A mutation’s impact on protein stability or fitness depends
on the genetic background in which it is acquired. Geneticists
call this phenomenon “epistasis.” It is well understood that
major drug resistance mutations in HIV-1 protease destabilize
the protease in some way, reducing protein stability or en-
zymatic activity, which can greatly alter the replicative and
transmissive ability, or fitness , of that viral strain ( Wang et al.
2002 ;Grenfell et al. 2004 ;Bloom et al. 2010 ;Boucher et al.
2016 ). To compensate for this fitness loss, protease accumu-
lates accessory mutations which have been shown to restore
© The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. Open Access
Mol. Biol. Evol. 34(6):1291–1306 doi:10.1093/molbev/msx095 Advance Access publication March 20, 2017
|
2202.01169.pdf | UNIFIED SCALING LAWS FOR ROUTED LANGUAGE MODELS
Aidan Clark∗, Diego de las Casas∗, Aurelia Guy∗, Arthur Mensch∗
Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman‡, Trevor Cai, Sebastian Borgeaud,
George van den Driessche, Eliza Rutherford, Tom Hennigan, Matthew Johnson‡, Katie Millican,
Albin Cassirer, Chris Jones, Elena Buchatskaya, David Budden, Laurent Sifre, Simon Osindero,
Oriol Vinyals, Jack Rae, Erich Elsen, Koray Kavukcuoglu, Karen Simonyan
DeepMind Google Research‡
ABSTRACT
The performance of a language model has been shown to be effectively modeled as a power-law in
its parameter count. Here we study the scaling behaviors of Routing Networks : architectures that
conditionally use only a subset of their parameters while processing an input. For these models,
parameter count and computational requirement form two independent axes along which an increase
leads to better performance. In this work we derive and justify scaling laws defined on these two
variables which generalize those known for standard language models and describe the performance
of a wide range of routing architectures trained via three different techniques. Afterwards we provide
two applications of these laws: first deriving an Effective Parameter Count along which all models
scale at the same rate, and then using the scaling coefficients to give a quantitative comparison of the
three routing techniques considered. Our analysis derives from an extensive evaluation of Routing
Networks across five orders of magnitude of size, including models with hundreds of experts and
hundreds of billions of parameters.
1 Introduction
It is a commonly held belief that increasing the size of a neural network leads to better performance, especially when
training on large and diverse real-world datasets. This vague and debated notion has become increasingly justified as
large empirical studies have shown that the performance of models on many interesting classes of problems is well
understood as a power law, where a multiplicative increase in model size leads to an additive reduction in the model's
loss [Kaplan et al., 2020, Hernandez et al., 2021, Henighan et al., 2020, Rosenfeld et al., 2019]. These relationships are
not well understood, but a key implication is that a sequence of small models1 can be used both to infer the performance
of models many times more powerful and to provide global information about the scalability of an architecture.
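To make the quoted power-law relationship concrete, the following sketch fits a saturating power law L(N) = a*N^(-alpha) + c to hypothetical (parameter count, loss) pairs and extrapolates it. The data points, constants, and the particular functional form are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    # Loss as a function of parameter count: a power law with an irreducible floor c,
    # so a multiplicative increase in n gives a shrinking additive reduction in loss.
    return a * n ** (-alpha) + c

# Hypothetical (parameter count, validation loss) measurements from small models.
params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
losses = np.array([4.59, 4.23, 3.89, 3.62, 3.36])

(a, alpha, c), _ = curve_fit(power_law, params, losses,
                             p0=[20.0, 0.1, 2.0],
                             bounds=([0, 0, 0], [np.inf, 1.0, np.inf]))
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss c = {c:.3f}")

# Extrapolate the fitted law to a model 10x larger than the largest one measured.
print(f"predicted loss at 1e10 parameters: {power_law(1e10, a, alpha, c):.3f}")
```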
Enter Routing Networks: models with the unusual property that each input interacts with only a subset of the network’s
parameters — chosen independently for each datapoint [Bengio et al., 2016, 2013, Denoyer and Gallinari, 2014]. For
a Routing Network, the number of parameters is nearly independent from the computational cost of processing a
datapoint. This bifurcates the definition of size and prevents a scaling law in parameters alone from fully describing the
model class. Specific Routing Networks have been trained successfully at large scales [Fedus et al., 2021, Du et al.,
2021, Artetxe et al., 2021], but the general scaling behavior is not well understood. In this work we analyze the behavior
of routed language models so that we might infer the scaling laws that describe their performance.
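As a concrete illustration of "each input interacts with only a subset of the network's parameters", here is a minimal top-1 routed feed-forward layer in NumPy. The sizes, softmax gate, and ReLU experts are assumptions chosen for illustration; the paper studies several routing techniques, and this is only a generic example of the model class.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_experts = 16, 64, 4

# Each expert is an independent two-layer MLP; only one expert is used per token.
experts = [
    (rng.normal(size=(d_model, d_hidden)) * 0.02, rng.normal(size=(d_hidden, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.normal(size=(d_model, n_experts)) * 0.02

def routed_ffn(x):
    """x: (n_tokens, d_model). Each token is sent to its top-1 expert."""
    logits = x @ router                          # (n_tokens, n_experts)
    choice = logits.argmax(axis=-1)              # expert index per token
    gate = np.exp(logits - logits.max(-1, keepdims=True))
    gate = gate / gate.sum(-1, keepdims=True)    # softmax gate values
    out = np.zeros_like(x)
    for e, (w_in, w_out) in enumerate(experts):
        mask = choice == e
        if mask.any():
            h = np.maximum(x[mask] @ w_in, 0.0)             # ReLU expert MLP
            out[mask] = gate[mask, e:e + 1] * (h @ w_out)   # scale by gate probability
    return out

tokens = rng.normal(size=(8, d_model))
print(routed_ffn(tokens).shape)  # (8, 16); each token touches 1 of 4 experts' parameters
```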
Correspondence to aidan.b.clark@gmail.com, diegolascasas@deepmind.com. All affiliation to DeepMind unless noted.
*Shared first authorship.
1Measured as training or inference floating point operations, devices or time required, financial cost, carbon emissions, etc. |
2304.10970.pdf | Can GPT-4 Perform Neural Architecture Search?
Mingkai Zheng1,3Xiu Su1Shan You2Fei Wang2
Chen Qian2Chang Xu1Samuel Albanie3
1The University of Sydney2SenseTime Research3CAML Lab, University of Cambridge
mingkaizheng@outlook.com ,xisu5992@uni.sydney.edu.au,
{youshan,wangfei,qianchen}@sensetime.com ,c.xu@sydney.edu.au
samuel.albanie.academic@gmail.com
Abstract
We investigate the potential of GPT-4 [ 52] to perform Neural Architecture Search
(NAS)—the task of designing effective neural architectures. Our proposed ap-
proach, GPT-4 Enhanced Neural arch ItectUreSearch (GENIUS), leverages the
generative capabilities of GPT-4 as a black-box optimiser to quickly navigate the
architecture search space, pinpoint promising candidates, and iteratively refine
these candidates to improve performance. We assess GENIUS across several bench-
marks, comparing it with existing state-of-the-art NAS techniques to illustrate its
effectiveness. Rather than targeting state-of-the-art performance, our objective is
to highlight GPT-4’s potential to assist research on a challenging technical problem
through a simple prompting scheme that requires relatively limited domain exper-
tise.1. More broadly, we believe our preliminary results point to future research that
harnesses general purpose language models for diverse optimisation tasks. We also
highlight important limitations to our study, and note implications for AI safety.
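A rough sketch of the kind of prompt-and-refine loop described in the abstract is shown below. Both query_llm and train_and_evaluate are hypothetical stand-ins (stubbed here with random choices so the loop runs end-to-end); a real GENIUS-style setup would call a GPT-4 API and actually train each proposed candidate.

```python
import random

def query_llm(prompt: str) -> dict:
    # Stand-in for a GPT-4 call: samples a random configuration so the sketch runs.
    # In the paper's setting this would be the LLM's parsed proposal, conditioned on
    # the search history embedded in the prompt.
    return {"channels": random.choice([32, 64, 128]),
            "depth": random.choice([2, 4, 8]),
            "kernel_size": random.choice([3, 5])}

def train_and_evaluate(config: dict) -> float:
    # Stand-in for training the candidate on a benchmark and returning its accuracy.
    return 0.5 + 0.001 * config["channels"] / config["kernel_size"] + random.random() * 0.01

def genius_style_search(n_rounds: int = 5):
    history, best = [], None
    for _ in range(n_rounds):
        prompt = ("Propose a CNN configuration as JSON with keys 'channels', 'depth', "
                  f"'kernel_size'. Previously tried (config, accuracy) pairs: {history}.")
        config = query_llm(prompt)
        accuracy = train_and_evaluate(config)
        history.append((config, accuracy))
        if best is None or accuracy > best[1]:
            best = (config, accuracy)
    return best

print(genius_style_search())
```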
1 Introduction
Recent years have witnessed a string of high-profile scientific breakthroughs by applying deep neural
networks to problems spanning domains such as protein folding [ 38], exoplanet detection [ 59] and
drug discovery [ 61]. To date, however, successful applications of AI have been marked by the
effective use of domain expertise to guide the design of the system, training data and development
methodology.
The recent release of GPT-4 represents a milestone in the development of “general purpose” systems
that exhibit a broad range of capabilities. While the full extent of these capabilities remains unknown,
preliminary studies and simulated human examinations indicate that the model’s knowledge spans
many scientific domains [ 52,6]. It is therefore of interest to consider the potential for GPT-4 to serve
as a general-purpose research tool that substantially reduces the need for domain expertise prevalent
in previous breakthroughs.
In this work, we investigate the feasibility of using GPT-4 without domain-specific fine-tuning to assist
with a research task that has received considerable attention in the machine learning community: deep
neural network design. Deep neural networks have proven effective on a diverse array of language
and perception tasks, spanning domains such as question answering [ 56], object recognition [ 16,40]
and object detection [ 19,46]. In the quest to improve performance, novel neural architecture designs,
exemplified by proposals such as ResNets [ 23] and Transformers [ 71], have attained substantial
gains in performance. Consequently, there has been significant interest in developing techniques
that yield further improvements to neural network architectures. In particular, Neural Architecture
1Code available at https://github.com/mingkai-zheng/GENIUS.
Preprint. Under review. |
2205.11487.pdf | Photorealistic Text-to-Image Diffusion Models
with Deep Language Understanding
Chitwan Saharia∗, William Chan∗, Saurabh Saxena†, Lala Li†, Jay Whang†,
Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan,
S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans,
Jonathan Ho†, David J Fleet†, Mohammad Norouzi∗
{sahariac,williamchan,mnorouzi}@google.com
{srbs,lala,jwhang,jonathanho,davidfleet}@google.com
Google Research, Brain Team
Toronto, Ontario, Canada
Abstract
We present Imagen, a text-to-image diffusion model with an unprecedented degree
of photorealism and a deep level of language understanding. Imagen builds on
the power of large transformer language models in understanding text and hinges
on the strength of diffusion models in high-fidelity image generation. Our key
discovery is that generic large language models (e.g. T5), pretrained on text-only
corpora, are surprisingly effective at encoding text for image synthesis: increasing
the size of the language model in Imagen boosts both sample fidelity and image-
text alignment much more than increasing the size of the image diffusion model.
Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset,
without ever training on COCO, and human raters find Imagen samples to be on par
with the COCO data itself in image-text alignment. To assess text-to-image models
in greater depth, we introduce DrawBench, a comprehensive and challenging
benchmark for text-to-image models. With DrawBench, we compare Imagen with
recent methods including VQ-GAN+CLIP, Latent Diffusion Models, GLIDE and
DALL-E 2, and find that human raters prefer Imagen over other models in side-by-
side comparisons, both in terms of sample quality and image-text alignment. See
imagen.research.google for an overview of the results.
1 Introduction
Multimodal learning has come into prominence recently, with text-to-image synthesis [ 53,12,57]
and image-text contrastive learning [ 49,31,74] at the forefront. These models have transformed
the research community and captured widespread public attention with creative image generation
[22,54] and editing applications [ 21,41,34]. To pursue this research direction further, we introduce
Imagen, a text-to-image diffusion model that combines the power of transformer language models
(LMs) [ 15,52] with high-fidelity diffusion models [ 28,29,16,41] to deliver an unprecedented
degree of photorealism and a deep level of language understanding in text-to-image synthesis. In
contrast to prior work that uses only image-text data for model training [e.g., 53,41], the key finding
behind Imagen is that text embeddings from large LMs [ 52,15], pretrained on text-only corpora, are
remarkably effective for text-to-image synthesis. See Fig. 1 for select samples.
Imagen comprises a frozen T5-XXL [ 52] encoder to map input text into a sequence of embeddings
and a 64×64 image diffusion model, followed by two super-resolution diffusion models for generating
∗Equal contribution.
†Core contribution. |
2310.08118.pdf | Can Large Language Models Really Improve by
Self-critiquing Their Own Plans?
Karthik Valmeekam∗
School of Computing & AI
Arizona State University, Tempe.
kvalmeek@asu.edu
Matthew Marquez∗
School of Computing & AI
Arizona State University, Tempe.
mmarqu22@asu.edu
Subbarao Kambhampati
School of Computing & AI
Arizona State University, Tempe.
rao@asu.edu
Abstract
There have been widespread claims about Large Language Models (LLMs) being
able to successfully verify or self-critique their candidate solutions in reasoning
problems in an iterative mode. Intrigued by those claims, in this paper we set out
to investigate the verification/self-critiquing abilities of large language models in
the context of planning. We evaluate a planning system that employs LLMs for
both plan generation and verification. We assess the verifier LLM’s performance
against ground-truth verification, the impact of self-critiquing on plan generation,
and the influence of varying feedback levels on system performance. Using GPT-4,
a state-of-the-art LLM, for both generation and verification, our findings reveal that
self-critiquing appears to diminish plan generation performance, especially when
compared to systems with external, sound verifiers, and that the LLM verifiers in that
system produce a notable number of false positives, compromising the system's
reliability. Additionally, the nature of feedback, whether binary or detailed, showed
minimal impact on plan generation. Collectively, our results cast doubt on the
effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
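The generate-verify-iterate setup being evaluated can be pictured with the toy loop below. The plan generator and the goal checker are hypothetical stand-ins (the generator here ignores feedback and simply reshuffles actions); in the paper's system both roles can be played by GPT-4, or verification can be delegated to an external, sound verifier.

```python
import random

GOAL = ["pick A", "stack A on B", "pick C", "stack C on A"]  # toy ground-truth plan

def generate_plan(feedback):
    # Stand-in for the LLM plan generator: proposes a candidate action sequence.
    # A real system would condition on the task description plus verifier feedback.
    plan = GOAL.copy()
    random.shuffle(plan)
    return plan

def sound_verifier(plan):
    # External, sound verifier: reports the first action that violates the goal ordering.
    for i, (got, want) in enumerate(zip(plan, GOAL)):
        if got != want:
            return False, f"step {i}: expected '{want}', got '{got}'"
    return True, "plan is valid"

def iterative_planning(max_iters=50):
    feedback = None
    for it in range(max_iters):
        plan = generate_plan(feedback)
        ok, feedback = sound_verifier(plan)  # swap in an LLM verifier to mimic self-critique
        if ok:
            return it + 1, plan
    return None, None

print(iterative_planning())
```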
1 Introduction
Large Language Models have rapidly captured the attention of the AI research community with their
exceptional natural language completion capabilities. Trained on web-scale language corpora, these
models have demonstrated the ability to generate seemingly valuable completions across a wide
range of topics. This led to a surge of interest in determining whether such models were able to
perform well on reasoning tasks. Even though initial anecdotal results showed promise, systematic
studies revealed their incompetency in reasoning – be it planning [ 12] or in simple arithmetic or
logic [3]. These results questioning the robustness of their reasoning abilities led researchers to
explore ways to improve these systems. Of particular interest to us is the emerging research on
self-critiquing, where the LLMs are used to critique their own candidate generations and iterate.
The current works [ 15,10,14] exhibit considerable optimism about using LLMs to critique their
own candidate generations, especially in an iterative setting where they keep refining their candidate
generations. Additionally, the notion that verifying correctness is computationally simpler than
generation for reasoning adds to the optimism. However, there are grounds to be skeptical about it as
∗Equal Contribution
Preprint. Under Review. |
Bradley-RankAnalysisIncomplete-1952.pdf | Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons
Author(s): Ralph Allan Bradley and Milton E. Terry
Source: Biometrika , Dec., 1952 , Vol. 39, No. 3/4 (Dec., 1952), pp. 324-345
Published by: Oxford University Press on behalf of Biometrika Trust
Stable URL: http://www.jstor.com/stable/2334029
|
2305.14224.pdf | mmT5: Modular Multilingual Pre-Training
Solves Source Language Hallucinations
Jonas Pfeiffer Francesco Piccinno Massimo Nicosia
Xinyi Wang Machel Reid Sebastian Ruder
Google DeepMind
Abstract
Multilingual sequence-to-sequence models per-
form poorly with increased language coverage
and fail to consistently generate text in the cor-
rect target language in few-shot settings. To
address these challenges, we propose mmT5,
a modular multilingual sequence-to-sequence
model. mmT5 utilizes language-specific mod-
ules during pre-training, which disentangle
language-specific information from language-
agnostic information. We identify representa-
tion drift during fine-tuning as a key limita-
tion of modular generative models and develop
strategies that enable effective zero-shot trans-
fer. Our model outperforms mT5 at the same
parameter sizes by a large margin on repre-
sentative natural language understanding and
generation tasks in 40+ languages. Compared
to mT5, mmT5 raises the rate of generating text
in the correct language under zero-shot settings
from 7% to 99%, thereby greatly alleviating the
source language hallucination problem.
1 Introduction
Multilingual pre-trained models (Conneau et al.,
2020a; Xue et al., 2021) have demonstrated im-
pressive performance on natural language under-
standing (NLU) tasks across different languages
(Hu et al., 2020; Ruder et al., 2021). These mod-
els are typically trained on large amounts of unla-
beled data in hundreds of languages. Recent large
language models (Brown et al., 2020; Chowdhery
et al., 2022) display surprising multilingual capa-
bilities despite being pre-trained predominantly on
English data. However, all of these models share
a key limitation: representations of all languages
compete for the model’s limited capacity. As a
result, models perform poorly with an increasing
number of pre-training languages and on languages
with less pre-training data. This is also known
as the “ curse of multilinguality ” (Conneau et al.,
2020a).
Figure 1: Architecture of mmT5. Language-specific bottleneck modules (dark blue and green components) are placed after the feed-forward component within each layer of the Transformer encoder-decoder model.
Natural language generation (NLG) tasks
present another challenge for current multilingual
models, which may overfit to the training languages
and partially forget their generation ability in the
target language (Vu et al., 2022), generating text
with the correct meaning in the wrong language.
We refer to this as the “ source language halluci-
nation problem ”.
To address these two limitations, we propose the
modular multilingual T5 (mmT5, Figure 1), the
first modular multilingual generative model. Dur-
ing pre-training, mmT5 allocates a small amount of
language-specific parameters to increase capacity
for multilingual modeling. At fine-tuning time, we
freeze the language-specific modules while tuning
the shared parameters, allowing direct adaptation to
a target language by swapping to the corresponding
language-specific module.
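A minimal sketch of a language-specific bottleneck module of this kind is given below, applied after a shared feed-forward block as in Figure 1. The bottleneck width, residual form, and activation are assumptions; the freezing pattern at the end mirrors the fine-tuning recipe described above.

```python
import torch
import torch.nn as nn

class LanguageBottleneck(nn.Module):
    """Down-project, nonlinearity, up-project, residual connection; one module per language."""
    def __init__(self, d_model: int = 512, d_bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class LayerWithLanguageModules(nn.Module):
    def __init__(self, d_model: int = 512, languages=("en", "de", "sw")):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))        # shared parameters
        self.lang_modules = nn.ModuleDict({l: LanguageBottleneck(d_model) for l in languages})

    def forward(self, x, lang: str):
        return self.lang_modules[lang](x + self.ffn(x))  # pick the module for this language

layer = LayerWithLanguageModules()
# Fine-tuning regime sketched in the text: freeze language modules, train shared weights.
for p in layer.lang_modules.parameters():
    p.requires_grad = False
out = layer(torch.randn(2, 8, 512), lang="sw")
print(out.shape)  # torch.Size([2, 8, 512])
```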
However, we observe an additional challenge
for mmT5: the fine-tuned shared representations |
Hastings1970.pdf | Monte Carlo Sampling Methods Using Markov Chains and Their Applications
W. K. Hastings
Biometrika , Vol. 57, No. 1. (Apr., 1970), pp. 97-109.
Stable URL:
http://links.jstor.org/sici?sici=0006-3444%28197004%2957%3A1%3C97%3AMCSMUM%3E2.0.CO%3B2-C
Biometrika is currently published by Biometrika Trust.
|
2109.10862v2.pdf | Recursively Summarizing Books with Human Feedback
Jeff Wu∗Long Ouyang∗Daniel M. Ziegler∗Nisan Stiennon∗Ryan Lowe∗
Jan Leike∗Paul Christiano∗
OpenAI
Abstract
A major challenge for scaling machine learning is training models to perform
tasks that are very difficult or time-consuming for humans to evaluate. We present
progress on this problem on the task of abstractive summarization of entire fiction
novels. Our method combines learning from human feedback with recursive
task decomposition: we use models trained on smaller parts of the task to assist
humans in giving feedback on the broader task. We collect a large volume of
demonstrations and comparisons from human labelers, and fine-tune GPT-3 using
behavioral cloning and reward modeling to do summarization recursively. At
inference time, the model first summarizes small sections of the book and then
recursively summarizes these summaries to produce a summary of the entire book.
Our human labelers are able to supervise and evaluate the models quickly, despite
not having read the entire books themselves. Our resulting model generates sensible
summaries of entire books, even matching the quality of human-written summaries
in a few cases (∼5% of books). We achieve state-of-the-art results on the recent
BookSum dataset for book-length summarization. A zero-shot question-answering
model using these summaries achieves competitive results on the challenging
NarrativeQA benchmark for answering questions about books and movie scripts.
We release datasets of samples from our model.2
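The recursive decomposition itself is simple to sketch. In the snippet below, summarize is a stand-in for the fine-tuned model (here it just truncates so the recursion is runnable), and the chunk size and target length are arbitrary; the point is only the structure: summarize leaves, concatenate, and recurse until the text fits.

```python
def summarize(text: str, max_len: int = 200) -> str:
    # Stand-in for the fine-tuned summarization model; truncation keeps the sketch runnable.
    return text[:max_len]

def chunk(text: str, size: int = 2000):
    return [text[i:i + size] for i in range(0, len(text), size)]

def recursive_summarize(text: str, target_len: int = 1000) -> str:
    """Summarize leaf chunks, then recursively summarize the concatenated summaries."""
    if len(text) <= target_len:
        return text
    partial = " ".join(summarize(c) for c in chunk(text))
    return recursive_summarize(partial, target_len)

book = "word " * 100_000
print(len(recursive_summarize(book)))
```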
1 Introduction
To train an ML model on a new task, we need a training signal that tells the model which behaviors
are better and which are worse. For some tasks, like playing a video game, this training signal can
be calculated automatically. However, for many useful tasks an accurate training signal can only be
provided via a human in the loop. For example, humans can provide demonstrations of the correct
behavior (Bain and Sammut, 1995) or compare two outputs from the model being trained (Christiano
et al., 2017), and this data is used to train the model.
In this paper we focus on tasks that are difficult for humans to supervise or evaluate, either because
the tasks take a lot of time or because they require specialized knowledge and expertise to evaluate.
For example, imagine training a model to summarize an entire sub-field of scientific research. For
a human to provide a demonstration or evaluate the quality of a model-generated summary, they
would likely need a huge amount of time and expertise. One could circumvent this difficulty by using
easier-to-measure proxy objectives (e.g. how often words in the summary relate to the topic, and how
accurate individual sentences in the summary are), but these proxies are usually less aligned with
∗This was a joint project of the OpenAI Alignment team. JW and LO contributed equally. DMZ, NS, and
RL were full-time contributors for most of the duration. JL and PC managed the team. Corresponding author
jeffwu@openai.com.
2See https://openaipublic.blob.core.windows.net/recursive-book-summ/website/index.html |
2303.02535.pdf | Streaming Active Learning with Deep Neural Networks
Akanksha Saran1, Safoora Yousefi2, Akshay Krishnamurthy1, John Langford1, Jordan T. Ash1
Abstract
Active learning is perhaps most naturally posed as
an online learning problem. However, prior active
learning approaches with deep neural networks
assume offline access to the entire dataset ahead
of time. This paper proposes VeSSAL, a new al-
gorithm for batch active learning with deep neural
networks in streaming settings, which samples
groups of points to query for labels at the mo-
ment they are encountered. Our approach trades
off between uncertainty and diversity of queried
samples to match a desired query rate without
requiring any hand-tuned hyperparameters. Alto-
gether, we expand the applicability of deep neu-
ral networks to realistic active learning scenarios,
such as applications relevant to HCI and large,
fractured datasets.
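One plausible way to realize a streaming query rule of this flavor, not necessarily the authors' exact estimator, is sketched below: keep a running covariance of embeddings of queried points, score each arriving point with a leverage-style novelty term, and rescale scores online so that the expected query probability roughly matches the target rate. The embeddings here are random stand-ins for model features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, target_rate = 8, 0.10           # embedding dimension, desired fraction of points to label

cov = np.eye(d) * 1e-3             # running covariance of embeddings of queried points
running_score = 1.0                # running mean score, used to normalise toward the target rate
queried, seen = [], 0

for t in range(5000):              # stream of hypothetical per-example embeddings
    g = rng.normal(size=d)
    score = g @ np.linalg.solve(cov, g)                  # leverage-style novelty/diversity score
    seen += 1
    running_score += (score - running_score) / seen
    p = min(1.0, target_rate * score / running_score)    # roughly matches the target rate on average
    if rng.random() < p:
        queried.append(t)
        cov += np.outer(g, g)                            # queried points make similar points less novel

print(f"queried {len(queried)} of {seen} points ({len(queried) / seen:.2%})")
```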
1. Introduction
Active learning considers a supervised learning situation
where unlabeled data are abundant, but acquiring labels is
expensive (Settles, 2010; Dasgupta, 2011). One example
of this might be classifying underlying disorders from his-
tological images, where obtaining labels involves querying
medical experts. Another might be predicting drug effi-
cacy, where labels corresponding to candidate molecules
could require clinical trials or intensive computational ex-
periments. In these settings, we typically want to carefully
consider what samples to request labels for, and to obtain
labels for data that are maximally useful for progressing the
performance of the model.
Active learning is a classic problem in machine learning,
with traditional approaches typically considering the convex
and well-specified regime (Settles, 2010; Dasgupta, 2011;
Hanneke, 2014a). Much recent interest in active learning
has turned to the neural network case, which requires some
special considerations. One such consideration is the ex-
1Microsoft Research NYC2Microsoft Bing. Correspondence
to: Akanksha Saran <akankshasaran@utexas.edu >.
pense associated with fitting these neural architectures —
when used in conjunction with a sequentially growing train-
ing set, as one has in active learning, the model cannot be
initialized from the previous round of optimization without
damaging generalization performance. Instead, practition-
ers typically re-initialize model parameters each time new
data are acquired and train the model from scratch (Ash &
Adams, 2020). This structure has repositioned active learn-
ing to focus on the batch domain, where we are interested
in simultaneously labeling a batch of ksamples to be inte-
grated into the training set. The model is typically retrained
only after the entire batch has been labeled.
In the convex case, where a model can easily be updated
to accommodate for a single sample, active learning algo-
rithms have tended to focus on uncertainty or sensitivity.
That is, a label for a given sample should be requested if
the model is highly uncertain about its corresponding la-
bel, or if incorporating this sample into the training set
will greatly reduce the set of plausible model weights. In
contrast, a high-performing, batch-mode active learning al-
gorithm must also consider diversity. If two samples are
relatively similar to each other, it is inefficient to include
them both in the batch, regardless of the model’s uncertainty
about their labels; having only one such sample labeled and
integrated into the current hypothesis may be enough to
resolve the model’s uncertainty on the other.
Popular approaches for batch active learning rely on sam-
plers that require all unlabeled data to be simultaneously
available. This reliance poses several major concerns for
the deployment of these algorithms. For one, the run time
of these methods is conditioned on the number of unlabeled
samples in a way that makes them unusable for extremely
large datasets. To exacerbate the issue, it is unclear how to
deploy these algorithms on modern databases, where sam-
ples might be stored in a fractured manner and cannot easily
be made available in their entirety.
It is especially unclear how to perform active learning in
a streaming setting, where data are not all simultaneously
available, and we do not know how many samples will
be encountered. Here we might instead prefer to specify
an acceptable labeling rate rather than a fixed acceptable
batch size. In this streaming setup, it is further desirable to
commit to a decision about whether to include an unlabeled
|
rules_of_ml.pdf |
Rules of Machine Learning:
Best Practices for ML Engineering
Martin Zinkevich
This document is intended to help those with a basic knowledge of machine learning get the
benefit of best practices in machine learning from around Google. It presents a style for machine
learning, similar to the Google C++ Style Guide and other popular guides to practical
programming. If you have taken a class in machine learning, or built or worked on a
machine-learned model, then you have the necessary background to read this document.
Terminology
Overview
Before Machine Learning
Rule #1: Don’t be afraid to launch a product without machine learning.
Rule #2: Make metrics design and implementation a priority.
Rule #3: Choose machine learning over a complex heuristic.
ML Phase I: Your First Pipeline
Rule #4: Keep the first model simple and get the infrastructure right.
Rule #5: Test the infrastructure independently from the machine learning.
Rule #6: Be careful about dropped data when copying pipelines.
Rule #7: Turn heuristics into features, or handle them externally.
Monitoring
Rule #8: Know the freshness requirements of your system.
Rule #9: Detect problems before exporting models.
Rule #10: Watch for silent failures.
Rule #11: Give feature sets owners and documentation.
Your First Objective
Rule #12: Don’t overthink which objective you choose to directly optimize.
Rule #13: Choose a simple, observable and attributable metric for your first
objective.
Rule #14: Starting with an interpretable model makes debugging easier.
Rule #15: Separate Spam Filtering and Quality Ranking in a Policy Layer.
ML Phase II: Feature Engineering
Rule #16: Plan to launch and iterate.
Rule #17: Start with directly observed and reported features as opposed to learned
features. |
stochastic-backprop-and-approximate-inference.pdf | Stochastic Backpropagation and Approximate Inference
in Deep Generative Models
Danilo J. Rezende, Shakir Mohamed, Daan Wierstra
{danilor, shakir, daanw }@google.com
Google DeepMind, London
Abstract
We marry ideas from deep neural networks
and approximate Bayesian inference to derive
a generalised class of deep, directed genera-
tive models, endowed with a new algorithm
for scalable inference and learning. Our algo-
rithm introduces a recognition model to rep-
resent an approximate posterior distribution
and uses this for optimisation of a variational
lower bound. We develop stochastic back-
propagation – rules for gradient backpropa-
gation through stochastic variables – and de-
rive an algorithm that allows for joint optimi-
sation of the parameters of both the genera-
tive and recognition models. We demonstrate
on several real-world data sets that by using
stochastic backpropagation and variational
inference, we obtain models that are able to
generate realistic samples of data, allow for
accurate imputations of missing data, and
provide a useful tool for high-dimensional
data visualisation.
1. Introduction
There is an immense effort in machine learning and
statistics to develop accurate and scalable probabilistic
models of data. Such models are called upon whenever
we are faced with tasks requiring probabilistic reason-
ing, such as prediction, missing data imputation and
uncertainty estimation; or in simulation-based analy-
ses, common in many scientific fields such as genetics,
robotics and control that require generating a large
number of independent samples from the model.
Recent efforts to develop generative models have fo-
cused on directed models, since samples are easily ob-
tained by ancestral sampling from the generative pro-
cess. Directed models such as belief networks and sim-
ilar latent variable models (Dayan et al., 1995; Frey,
1996; Saul et al., 1996; Bartholomew & Knott, 1999;
Proceedings of the 31stInternational Conference on Ma-
chine Learning , Beijing, China, 2014. JMLR: W&CP vol-
ume 32. Copyright 2014 by the author(s).Uria et al., 2014; Gregor et al., 2014) can be easily sam-
pled from, but in most cases, efficient inference algo-
rithms have remained elusive. These efforts, combined
with the demand for accurate probabilistic inferences
and fast simulation, lead us to seek generative models
that are i) deep, since hierarchical architectures allow
us to capture complex structure in the data, ii) al-
low for fast sampling of fantasy data from the inferred
model, and iii) are computationally tractable and scal-
able to high-dimensional data.
We meet these desiderata by introducing a class of
deep, directed generative models with Gaussian la-
tent variables at each layer. To allow for efficient and
tractable inference, we introduce an approximate
representation of the posterior over the latent variables
using a recognition model that acts as a stochastic en-
coder of the data. For the generative model, we de-
rive the objective function for optimisation using vari-
ational principles; for the recognition model, we spec-
ify its structure and regularisation by exploiting recent
advances in deep learning. Using this construction, we
can train the entire model by a modified form of gra-
dient backpropagation that allows for optimisation of
the parameters of both the generative and recognition
models jointly.
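The core trick can be illustrated with a few lines of autograd code: rewrite the Gaussian latent as a deterministic function of its mean and variance plus independent noise, so that ordinary backpropagation reaches the parameters of both networks. The toy encoder/decoder below and the Gaussian reconstruction term are illustrative assumptions, not the paper's architecture.

```python
import torch

torch.manual_seed(0)
x = torch.randn(16, 5)                       # a batch of observations

enc = torch.nn.Linear(5, 2 * 3)              # recognition model: outputs mean and log-variance of q(z|x)
dec = torch.nn.Linear(3, 5)                  # generative model: maps the 3-d latent back to x-space

mu, logvar = enc(x).chunk(2, dim=-1)
eps = torch.randn_like(mu)
z = mu + torch.exp(0.5 * logvar) * eps       # stochastic node rewritten as a deterministic
                                             # function of (mu, sigma) plus independent noise

recon = ((dec(z) - x) ** 2).sum(dim=-1).mean()                   # -log p(x|z) up to constants (Gaussian)
kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(-1).mean()  # KL(q || N(0, I)) in closed form
loss = recon + kl

loss.backward()                              # gradients reach both recognition and generative parameters
print(enc.weight.grad.norm().item(), dec.weight.grad.norm().item())
```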
We build upon the large body of prior work (in section
6) and make the following contributions:
•We combine ideas from deep neural networks and
probabilistic latent variable modelling to derive a
general class of deep, non-linear latent Gaussian
models (section 2).
•We present a new approach for scalable varia-
tional inference that allows for joint optimisation
of both variational and model parameters by ex-
ploiting the properties of latent Gaussian distri-
butions and gradient backpropagation (sections 3
and 4).
•We provide a comprehensive and systematic eval-
uation of the model demonstrating its applicabil-
ity to problems in simulation, visualisation, pre-
diction and missing data imputation (section 5). |
10.1016.j.cell.2023.12.026.pdf | Article
Immune evasion, infectivity, and fusogenicity of
SARS-CoV-2 BA.2.86 and FLip variants
Graphical abstract
Highlights
- BA.2.86 is less immune evasive compared to FLip and other XBB variants
- BA.2.86 is antigenically more similar to BA.2 and BA.4/5 than XBB variants
- MAb S309 is unable to neutralize BA.2.86, possibly contributed by a D339H mutation
- The fusion and infectivity of BA.2.86 is higher than XBB variants in CaLu-3 cells
Authors
Panke Qu, Kai Xu, Julia N. Faraone, ..., Daniel Jones, Richard J. Gumina, Shan-Lu Liu
Correspondence
liu.6244@osu.edu
In brief
The SARS-CoV-2 BA.2.86 variant is less resistant to neutralization by bivalent vaccine-induced antibodies compared to FLip and other XBB variants but more resistant to mAb S309. BA.2.86 shows higher fusogenicity and infectivity in CaLu-3 cells compared to that in 293T-ACE2 cells.
Qu et al., 2024, Cell 187, 585–595
February 1, 2024 © 2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.026
|
10.1016.j.cell.2023.12.032.pdf | Article
DNA-guided transcription factor cooperativity
shapes face and limb mesenchyme
Graphical abstract
Highlights
- Mutually dependent binding of TWIST1 and homeodomain TFs in embryonic mesenchyme
- TF co-binding drives enhancer accessibility and shared transcriptional regulation
- Weak TF-TF contacts guided by DNA mediate the selectivity of cooperating partners
- TWIST1, partners, and bound targets enriched for face-shape-associated SNPs
Authors
Seungsoo Kim, Ekaterina Morgunova, Sahin Naqvi, ..., Peter Claes, Jussi Taipale, Joanna Wysocka
Correspondence
wysocka@stanford.edu
In brief
Epigenomic, biochemical, structural, and human phenotypic analyses of transcription factors that regulate a composite DNA motif in the embryonic face and limb mesenchyme reveal how DNA-guided cooperative binding gives rise to specificity among members of large TF families. This cooperativity promotes the integration of cellular and positional identity programs and contributes to the evolution and individual variation of human facial shape.
Kim et al., 2024, Cell 187, 692–711
February 1, 2024 © 2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.032
|
10.1101.2023.04.30.538439.pdf | scGPT: Towards Building a Foundation Model for Single-Cell 1
Multi-omics Using Generative AI 2
Haotian Cui1,2,3 ∗, Chloe Wang1,2,3∗, Hassaan Maan1,3,4, Bo Wang1,2,3,4,5 †3
1Peter Munk Cardiac Centre, University Health Network, Toronto, ON, Canada 4
2Department of Computer Science, University of Toronto, Toronto, ON, Canada 5
3Vector Institute, Toronto, ON, Canada 6
4Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada 7
5Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, 8
Canada 9
Abstract 10
Generative pre-trained models have achieved remarkable success in various domains such as nat-
ural language processing and computer vision. Specifically, the combination of large-scale diverse
datasets and pre-trained transformers has emerged as a promising approach for developing founda-
tion models. While texts are made up of words, cells can be characterized by genes. This analogy
inspires us to explore the potential of foundation models for cell and gene biology. By leveraging the
exponentially growing single-cell sequencing data, we present the first attempt to construct a single-
cell foundation model through generative pre-training on over 10 million cells. We demonstrate that
the generative pre-trained transformer, scGPT, effectively captures meaningful biological insights
into genes and cells. Furthermore, the model can be readily finetuned to achieve state-of-the-art
performance across a variety of downstream tasks, including multi-batch integration, multi-omic
integration, cell-type annotation, genetic perturbation prediction, and gene network inference. The
scGPT codebase is publicly available at https://github.com/bowang-lab/scGPT.
1 Main
Generative pre-trained models have recently achieved unprecedented success in many domains. The
most well-known applications include computer vision and natural language generation (NLG) [44,
43, 45]. These foundation models such as DALL-E2 and GPT-4 follow a similar paradigm of pre-
training transformers on large-scale diverse datasets [43, 45]. These foundation models can be
readily tailored to a variety of downstream tasks and scenarios. More interestingly, they demon-
strate improved performance on multiple tasks compared to task-specific models trained from
scratch [22, 58, 47]. This showcases strong evidence of a task-agnostic and “deep” understanding
∗These authors contributed equally.
†Corresponding author. Email: bowang@vectorinstitute.ai
|
56-preference-proxies-evaluating-.pdf | Preference Proxies: Evaluating Large Language Models in capturing Human
Preferences in Human-AI Tasks
Mudit Verma* 1Siddhant Bhambri* 1Subbarao Kambhampati1
Abstract
In this work, we investigate the potential of Large
Language Models (LLMs) to serve as effective
human proxies by capturing human preferences
in the context of collaboration with AI agents. Fo-
cusing on two key aspects of human preferences
- explicability and sub-task specification in team
settings - we explore LLMs’ ability to not only
model mental states but also understand human
reasoning processes. By developing scenarios
where optimal AI performance relies on modeling
human mental states and reasoning, our investi-
gation involving two different preference types
and a user study (with 17 participants) contributes
valuable insights into the suitability of LLMs as
“Preference Proxies” in various human-AI appli-
cations, paving the way for future research on
the integration of AI agents with human users in
Human-Aware AI tasks.
1. Introduction
As Artificial Intelligence (AI) progresses, the development
of the next generation of AI agents requires an enhanced
understanding of human thought, processes and behaviors.
A vital component of this understanding is the Theory of
Mind (ToM), which involves attributing mental states – such
as beliefs, intentions, desires, and emotions – to oneself and
others, and to understand that these mental states may dif-
fer from one’s own. Large language models (LLMs) have
demonstrated exceptional abilities in various tasks that hu-
mans excel at (Hagendorff, 2023; Frieder et al., 2023; Ko-
rinek, 2023; Shen et al., 2023; Bubeck et al., 2023), making
them suitable candidates for exploring the capabilities of
ToM in AI systems (Kosinski, 2023).
Research on LLM’s ToM capacities has primarily focused
on their ability to model mental states associated with social
and emotional reasoning, as well as logical problem-solving
*Equal contribution1SCAI, Arizona State University, USA.
Correspondence to: Mudit Verma <muditverma@asu.edu >.
Preprint under review
Figure 1: The various roles of Large Language Models in
Human Aware AI interaction as a Human Proxy, Translator
(common lingua franca), and the Actor. In this work, we
investigate the role of LLMs as a Human Proxy (called
Preference Proxies) especially when they have to provide
answers to queries meant for eliciting human in the loop’s
preferences.
(Kosinski, 2023; Baker et al., 2011; Wellman et al., 2001;
Astington & Baird, 2005; Cuzzolin et al., 2020; Rescorla,
2015; C ¸elikok et al., 2019). While LLMs have been used
for several tasks like summarization, text generation, com-
prehension, conversations, etc., there is limited literature on
testing LLMs' ability to predict human preferences. Since
these LLMs are in fact trained on human-generated data
available in the wild (Brown et al., 2020) and have been fine-
tuned with human feedback on various prompts (Ouyang
et al., 2022), a natural question arises:
Can LLMs capture human preferences?
We investigate whether LLMs can serve as human-proxy to
the real human in the loop (HiL) and answer queries made by
an AI agent meant for the real human. Several prior works
in learning human preferences have leveraged human feedback
of some form, like binary feedback, demonstrations,
natural language guidance, action guidance, etc. We expect
the LLM to work for an AI agent that is acting in the world
(powered by a reinforcement learning, planning, or other
sequential decision-making engines). A common theme
across these works has been to model a reward function that
captures human’s expectations from the agent. Therefore, |
10.1038.s41586-019-1923-7.pdf | Article
Improved protein structure prediction using
potentials from deep learning
Andrew W. Senior1,4*, Richard Evans1,4, John Jumper1,4, James Kirkpatrick1,4, Laurent Sifre1,4,
Tim Green1, Chongli Qin1, Augustin Žídek1, Alexander W. R. Nelson1, Alex Bridgland1,
Hugo Penedones1, Stig Petersen1, Karen Simonyan1, Steve Crossan1, Pushmeet Kohli1,
David T . Jones2,3, David Silver1, Koray Kavukcuoglu1 & Demis Hassabis1
Protein structure prediction can be used to determine the three-dimensional shape of
a protein from its amino acid sequence1. This problem is of fundamental importance
as the structure of a protein largely determines its function2; however, protein
structures can be difficult to determine experimentally. Considerable progress has
recently been made by leveraging genetic information. It is possible to infer which amino acid residues are in contact by analysing covariation in homologous
sequences, which aids in the prediction of protein structures
3. Here we show that we
can train a neural network to make accurate predictions of the distances between
pairs of residues, which convey more information about the structure than contact
predictions. Using this information, we construct a potential of mean force4 that can
accurately describe the shape of a protein. We find that the resulting potential can be
optimized by a simple gradient descent algorithm to generate structures without
complex sampling procedures. The resulting system, named AlphaFold, achieves high
accuracy, even for sequences with fewer homologous sequences. In the recent Critical
Assessment of Protein Structure Prediction5 (CASP13)—a blind assessment of the state
of the field—AlphaFold created high-accuracy structures (with template modelling
(TM) scores6 of 0.7 or higher) for 24 out of 43 free modelling domains, whereas the
next best method, which used sampling and contact information, achieved such accuracy for only 14 out of 43 domains. AlphaFold represents a considerable advance
in protein-structure prediction. We expect this increased accuracy to enable insights
into the function and malfunction of proteins, especially in cases for which no
structures for homologous proteins have been experimentally determined7.
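The relax-by-gradient-descent idea can be illustrated with a toy stand-in: treat a set of "predicted" pairwise distances as the center of a quadratic potential and optimize random initial coordinates against it. AlphaFold's actual potential is built from predicted distance distributions (plus other terms), so the snippet below is only a schematic of the optimization step, with arbitrary sizes and noise.

```python
import torch

torch.manual_seed(0)
n = 20                                           # residues in a toy chain

def pairwise_dist(x):
    diff = x.unsqueeze(0) - x.unsqueeze(1)
    return (diff.pow(2).sum(-1) + 1e-8).sqrt()   # epsilon keeps gradients finite at zero distance

# Stand-in for network-predicted distances: a random reference fold plus noise.
ref = torch.cumsum(torch.randn(n, 3), dim=0)
d_pred = pairwise_dist(ref) + 0.1 * torch.randn(n, n)

coords = torch.randn(n, 3, requires_grad=True)   # initial guess for the structure
opt = torch.optim.Adam([coords], lr=0.05)
mask = ~torch.eye(n, dtype=torch.bool)

for step in range(2000):
    opt.zero_grad()
    potential = ((pairwise_dist(coords) - d_pred)[mask] ** 2).mean()  # quadratic potential stand-in
    potential.backward()
    opt.step()

print(f"final potential: {potential.item():.4f}")
```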
Proteins are at the core of most biological processes. As the function of
a protein is dependent on its structure, understanding protein struc-
tures has been a grand challenge in biology for decades. Although
several experimental structure determination techniques have been
developed and improved in accuracy, they remain difficult and time-
consuming2. As a result, decades of theoretical work has attempted to
predict protein structures from amino acid sequences.
CASP5 is a biennial blind protein structure prediction assessment
run by the structure prediction community to benchmark progress in
accuracy. In 2018, AlphaFold joined 97 groups from around the world in
entering CASP138. Each group submitted up to 5 structure predictions
for each of 84 protein sequences for which experimentally determined
structures were sequestered. Assessors divided the proteins into 104
domains for scoring and classified each as being amenable to template-
based modelling (TBM, in which a protein with a similar sequence has
a known structure, and that homologous structure is modified in
accordance with the sequence differences) or requiring free model -
ling (FM, in cases in which no homologous structure is available), with an intermediate (FM/TBM) category. Figure 1a shows that AlphaFold
predicts more FM domains with high accuracy than any other system,
particularly in the 0.6–0.7 TM-score range. The TM score—ranging
between 0 and 1—measures the degree of match of the overall (back -
bone) shape of a proposed structure to a native structure. The assessors
ranked the 98 participating groups by the summed, capped z -scores of
the structures, separated according to category. AlphaFold achieved
a summed z-score of 52.8 in the FM category (best-of-five) compared
with 36.6 for the next closest group (322). Combining FM and TBM/FM
categories, AlphaFold scored 68.3 compared with 48.2. AlphaFold is
able to predict previously unknown folds to high accuracy (Fig. 1b).
Despite using only FM techniques and not using templates, AlphaFold
also scored well in the TBM category according to the assessors’ for -
mula 0-capped z-score, ranking fourth for the top-one model or first
for the best-of-five models. Much of the accuracy of AlphaFold is due to the accuracy of the distance predictions, which is evident from the
high precision of the corresponding contact predictions (Fig. 1c and
Extended Data Fig. 2a).https://doi.org/10.1038/s41586-019-1923-7
Received: 2 April 2019
Accepted: 10 December 2019
Published online: 15 January 2020
1DeepMind, London, UK. 2The Francis Crick Institute, London, UK. 3University College London, London, UK. 4These authors contributed equally: Andrew W. Senior, Richard Evans, John Jumper,
James Kirkpatrick, Laurent Sifre. *e-mail: andrewsenior@google.com |
2211.17192.pdf | Fast Inference from Transformers via Speculative Decoding
Yaniv Leviathan* 1Matan Kalman* 1Yossi Matias1
Abstract
Inference from large autoregressive models like
Transformers is slow - decoding Ktokens takes
Kserial runs of the model. In this work we in-
troduce speculative decoding - an algorithm to
sample from autoregressive models faster without
any changes to the outputs , by computing several
tokens in parallel. At the heart of our approach lie
the observations that (1) hard language-modeling
tasks often include easier subtasks that can be ap-
proximated well by more efficient models, and
(2) using speculative execution and a novel sam-
pling method, we can make exact decoding from
the large models faster, by running them in par-
allel on the outputs of the approximation mod-
els, potentially generating several tokens concur-
rently, and without changing the distribution. Our
method can accelerate existing off-the-shelf mod-
els without retraining or architecture changes. We
demonstrate it on T5-XXL and show a 2X-3X
acceleration compared to the standard T5X imple-
mentation, with identical outputs.
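The sampling rule usually associated with this scheme can be checked numerically in a few lines: draw a token from the draft distribution q, accept it with probability min(1, p(x)/q(x)), and otherwise resample from the normalized residual max(0, p - q). The toy distributions below are arbitrary, and exact details in the paper may differ in batching and indexing; the empirical histogram should match the target p.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                            # toy vocabulary size
p = rng.dirichlet(np.ones(V))                    # target model's next-token distribution
q = rng.dirichlet(np.ones(V))                    # draft (approximation) model's distribution

def speculative_step(p, q):
    """Sample one token whose marginal distribution is exactly p, using a draft from q."""
    x = rng.choice(len(q), p=q)                  # draft token
    if rng.random() < min(1.0, p[x] / q[x]):     # accept with probability min(1, p/q)
        return x
    residual = np.maximum(p - q, 0.0)            # otherwise resample from norm(max(0, p - q))
    return rng.choice(len(p), p=residual / residual.sum())

# Empirical check that the procedure reproduces the target distribution p.
samples = np.array([speculative_step(p, q) for _ in range(200_000)])
empirical = np.bincount(samples, minlength=V) / len(samples)
print(np.round(p, 3), np.round(empirical, 3))
```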
1. Introduction
Large autoregressive models, notably large Transformers
(Vaswani et al., 2017), are much more capable than smaller
models, as is evidenced countless times in recent years e.g.,
in the text or image domains, like GPT-3 (Brown et al.,
2020), LaMDA (Thoppilan et al., 2022), Parti (Yu et al.,
2022), and PaLM (Chowdhery et al., 2022). Unfortunately,
a single decode step from these larger models is significantly
slower than a step from their smaller counterparts, and mak-
ing things worse, these steps are done serially - decoding K
tokens takes Kserial runs of the model.
Given the importance of large autoregressive models and
specifically large Transformers, several approaches were
*Equal contribution1Google Research, Mountain
View, CA, USA. Correspondence to: Yaniv Leviathan
<leviathan@google.com >.
developed to make inference from them faster. Some ap-
proaches aim to reduce the inference cost for allinputs
equally (e.g. Hinton et al., 2015; Jaszczur et al., 2021;
Hubara et al., 2016; So et al., 2021; Shazeer, 2019). Other
approaches stem from the observation that not all infer-
ence steps are born alike - some require a very large model,
while others can be approximated well by more efficient
models. These adaptive computation methods (e.g. Han
et al., 2021; Sukhbaatar et al., 2019; Schuster et al., 2021;
Scardapane et al., 2020; Bapna et al., 2020; Elbayad et al.,
2019; Schwartz et al., 2020) aim to use less compute re-
sources for easier inference steps. While many of these
solutions have proven extremely effective in practice, they
usually require changing the model architecture, changing
the training-procedure and re-training the models, and don’t
maintain identical outputs.
The key observation above, that some inference steps are
“harder” and some are “easier”, is also a key motivator for
our work. We additionally observe that inference from large
models is often not bottlenecked on arithmetic operations,
but rather on memory bandwidth and communication, so
additional computation resources might be available. There-
fore we suggest increasing concurrency as a complemen-
tary approach to using an adaptive amount of computation.
Specifically, we are able to accelerate inference without
changing the model architectures, without changing the
training-procedures or needing to re-train the models, and
without changing the model output distribution. This is
accomplished via speculative execution .
Speculative execution (Burton, 1985; Hennessy & Patterson,
2012) is an optimization technique, common in processors,
where a task is performed in parallel to verifying if it’s
actually needed - the payoff being increased concurrency.
A well-known example of speculative execution is branch
prediction. For speculative execution to be effective, we
need an efficient mechanism to suggest tasks to execute
that are likely to be needed. In this work, we generalize
speculative execution to the stochastic setting - where a
taskmight be needed with some probability. Applying this
to decoding from autoregressive models like Transformers,
we sample generations from more efficient approximation
models as speculative prefixes for the slower target mod-
els. With a novel sampling method, speculative sampling ,
we maximize the probability of these speculative tasks to
|
MLSB2021-Deep-generative-models-create.pdf | Deep generative models create new and diverse
protein structures
Zeming Lin
NYU & FAIR
zl2799@nyu.edu, zlin@fb.com
Tom Sercu
FAIR
tsercu@fb.com
Yann LeCun
NYU & FAIR
yann@nyu.edu, yann@fb.com
Alexander Rives
FAIR
arives@fb.com
Abstract
We explore the use of modern variational autoencoders for generating protein
structures. Models are trained across a diverse set of natural protein domains. Three-
dimensional structures are encoded implicitly in the form of an energy function
that expresses constraints on pairwise distances and angles. Atomic coordinates
are recovered by optimizing the parameters of a rigid body representation of
the protein chain to fit the constraints. The model generates diverse structures
across a variety of folds, and exhibits local coherence at the level of secondary
structure, generating alpha helices and beta sheets, as well as globally coherent
tertiary structure. A number of generated protein sequences have high confidence
predictions by AlphaFold that agree with their designs. The majority of these have
no significant sequence homology to natural proteins.
Most designed proteins are variations on existing proteins. It is of great interest to create de novo
proteins that go beyond what has been invented by nature. A line of recent work has explored
generative models for protein structures [ 1,2,3,4,5,6]. The main challenge for a generative
model is to propose stable structures that can be realized as the minimum energy state for a protein
sequence, i.e. the endpoint of folding. The space of possible three-dimensional conformations of a
protein sequence is exponentially large [ 7], but out of this set of possible conformations, most do not
correspond to stable realizable structures.
In this work we explore the use of modern variational autoencoders (V AEs) as generative models
of protein structures. We find that the models can produce coherent local and global structural
organization while proposing varied and diverse folds. We use AlphaFold to assess the viability of
sampled sequences, finding that many sequences are predicted to fold with high confidence to their
designed structures. To assess the novelty of the generated sequences, we search sequence databases
including metagenomic information for homologous sequences, finding no significant matches for a
large fraction of the generations.
1 Modeling
1.1 Overview
Figure 1 presents an overview of the approach. The structure is implicitly encoded as the min-
imum of an energy over possible conformations of the protein chain. We write the structure
x∗= argmin xE(x;z) +R(x)as the outcome of this minimization. E(x;z)is the output of
a decoder. Optionally R(x)subsumes additional energy terms. During training an encoder and
Machine Learning for Structural Biology Workshop, NeurIPS 2021. |
10.1101.2024.03.21.585615.pdf | Engineeringhighlyactiveanddiversenuclease
enzymesbycombiningmachinelearningand
ultra-high-throughputscreening
Neil Thomas*,1, David Belanger*,2, Chenling Xu3, Hanson Lee3, Kathleen Hirano3, Kosuke Iwai3,
Vanja Polic3, Kendra D Nyberg3, Kevin Hoff3, Lucas Frenz3, Charlie A Emrich1, Jun W Kim1,
Mariya Chavarha4, Abi Ramanan1, Jeremy J Agresti3, Lucy J Colwell2,5
1X, the Moonshot Factory
2Google DeepMind
3Triplebar
4Google Accelerated Sciences
5Dept. of Chemistry, Cambridge University
*denotes equal contribution
Correspondence to: Neil Thomas <thomas.a.neil@gmail.com>, David Belanger <dbelanger@google.com>, Lucy Colwell <lcolwell@google.com>
Abstract
Designing enzymes to function in novel chemical environments is a central goal of synthetic
biology with broad applications. In this work, we describe a campaign guided by
machine-learning (ML) to engineer the nuclease NucB, an enzyme with applications in the
treatment of chronic wounds. In a multi-round enzyme evolution campaign, we combined
ultra-high-throughput functional screening with ML and compared it to parallel campaigns of
in-vitro directed evolution (DE) and in-silico hit recombination (HR). The ML-guided campaign
discovered hundreds of highly-active variants with up to 19-fold nuclease activity improvement,
outperforming the 12-fold improvement discovered by DE. Further, the ML-designed hits were
up to 15 mutations away from the NucB wildtype, far outperforming the HR approach in both hit
rate and diversity. We also show that models trained on evolutionary data alone, without access
to any experimental data, can design functional variants at a significantly higher rate than a
traditional approach to initial library generation. To drive future progress in ML-guided design,
we curate a dataset of 55K diverse variants, one of the most extensive genotype-phenotype
enzyme activity landscapes to date. Data and code is available at:
https://github.com/google-deepmind/nuclease_design.
Introduction
The ability to engineer proteins has revolutionized applications in industry and therapeutics1–6.
Generally, a protein engineering campaign can be divided into two stages7–9. First, the discovery |
2112.04426.pdf | Improving language models by retrieving
from trillions of tokens
Sebastian Borgeaud†, Arthur Mensch†, Jordan Hoffmann†, Trevor Cai, Eliza Rutherford, Katie Millican,
George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas,
Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones,
Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero,
Karen Simonyan, Jack W. Rae‡, Erich Elsen‡ and Laurent Sifre†,‡
All authors from DeepMind, †Equal contributions, ‡Equal senior authorship
We enhance auto-regressive language models by conditioning on document chunks retrieved from a
large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, our
Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1
on the Pile, despite using 25× fewer parameters. After fine-tuning, Retro performance translates to
downstream knowledge-intensive tasks such as question answering. Retro combines a frozen BERT
retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on
an order of magnitude more data than what is typically consumed during training. We typically train
Retro from scratch, yet can also rapidly Retrofit pre-trained transformers with retrieval and still
achieve good performance. Our work opens up new avenues for improving language models through
explicit memory at unprecedented scale.
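The chunked retrieval step can be sketched as follows: split the input into fixed-length chunks, embed each chunk, and look up its nearest neighbours in a key-value store of previously embedded chunks. The random-projection embedder and the toy database below are stand-ins for the frozen BERT retriever and the trillion-token database used by Retro; the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, chunk_len, k = 32, 64, 2                     # embedding dim, tokens per chunk, neighbours

proj = rng.normal(size=(d, 1000))               # fixed random projection over a 1000-token vocab

def embed(chunk_tokens):
    # Stand-in for the frozen BERT chunk embedder: a normalised projection of token counts.
    v = proj @ np.bincount(chunk_tokens, minlength=1000).astype(float)
    return v / (np.linalg.norm(v) + 1e-8)

# Toy retrieval database: embeddings (keys) and raw tokens (values) of past chunks.
db_tokens = [rng.integers(0, 1000, size=chunk_len) for _ in range(10_000)]
db_keys = np.stack([embed(t) for t in db_tokens])

def retrieve_neighbours(sequence):
    """Split the input into chunks; for each chunk return the k nearest database chunks."""
    chunks = [sequence[i:i + chunk_len] for i in range(0, len(sequence), chunk_len)]
    neighbours = []
    for c in chunks:
        sims = db_keys @ embed(c)               # cosine similarity, since keys are normalised
        top = np.argsort(-sims)[:k]
        neighbours.append([db_tokens[i] for i in top])
    return chunks, neighbours                   # in Retro, neighbours of chunk i condition chunk i+1

seq = rng.integers(0, 1000, size=4 * chunk_len)
chunks, neighbours = retrieve_neighbours(seq)
print(len(chunks), len(neighbours[0]), neighbours[0][0].shape)
```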
1. Introduction
Language modelling (LM) is an unsupervised task that consists of modelling the probability of text,
usually by factorising it into conditional next-token predictions $p(x_1, \ldots, x_n) = \prod_i p(x_i \mid x_{<i})$. Neural
networks have proven to be powerful language models, first in the form of recurrent architectures
(Graves, 2013; Jozefowicz et al., 2016; Mikolov et al., 2010) and more recently in the form of
Transformers (Vaswani et al., 2017), that use attention to contextualise the past. Large performance
improvements have come from increasing the amount of data, training compute, or model parameters.
Transformers have been scaled from 100-million-parameter models in seminal work to over a hundred
billion parameters (Brown et al., 2020; Radford et al., 2019) in the last two years which has led to
models that do very well on a wide array of tasks in a zero or few-shot formulation. Increasing model
size predictably improves performance on a wide range of downstream tasks (Kaplan et al., 2020).
The benefits of increasing the number of parameters come from two factors: additional computations
at training and inference time, and increased memorization of the training data.
In this work, we endeavor to decouple these, by exploring efficient means of augmenting language
models with a massive-scale memory without significantly increasing computations. Specifically, we
suggest retrieval from a large text database as a complementary path to scaling language models.
Instead of increasing the size of the model and training on more data, we equip models with the
ability to directly access a large database to perform predictions—a semi-parametric approach. At
a high level, our Retrieval Transformer (Retro) model splits the input sequence into chunks and
retrieves text similar to the previous chunk to improve the predictions in the current chunk. Existing
retrieval for language modelling work only considers small transformers (∼100 million parameters)
and databases of limited size (up to billions of tokens) (Guu et al., 2020; Khandelwal et al., 2020;
Lewis et al., 2020; Yogatama et al., 2021). To our knowledge, our work is the first to show the benefits
of scaling the retrieval database to trillions of tokens for large parametric language models. Our main
Corresponding authors: {sborgeaud|amensch|jordanhoffmann|sifre}@deepmind.com |
10.1038.s41586-023-06291-2.pdf | Large language models encode clinical
knowledge
Karan Singhal1,4 ✉, Shekoofeh Azizi1,4 ✉, Tao Tu1,4, S. Sara Mahdavi1, Jason Wei1,
Hyung Won Chung1, Nathan Scales1, Ajay Tanwani1, Heather Cole-Lewis1, Stephen Pfohl1,
Perry Payne1, Martin Seneviratne1, Paul Gamble1, Chris Kelly1, Abubakr Babiker1,
Nathanael Schärli1, Aakanksha Chowdhery1, Philip Mansfield1, Dina Demner-Fushman2,
Blaise Agüera y Arcas1, Dale Webster1, Greg S. Corrado1, Yossi Matias1, Katherine Chou1,
Juraj Gottweis1, Nenad Tomasev3, Yun Liu1, Alvin Rajkomar1, Joelle Barral1,
Christopher Semturs1, Alan Karthikesalingam1,5 ✉ & Vivek Natarajan1,5 ✉
Large language models (LLMs) have demonstrated impressive capabilities, but the
bar for clinical applications is high. Attempts to assess the clinical knowledge of
models typically rely on automated evaluations based on limited benchmarks. Here, to address these limitations, we present MultiMedQA, a benchmark combining six
existing medical question answering datasets spanning professional medicine,
research and consumer queries and a new dataset of medical questions searched
online, HealthSearchQA. We propose a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible
harm and bias. In addition, we evaluate Pathways Language Model
1 (PaLM, a 540-billion
parameter LLM) and its instruction-tuned variant, Flan-PaLM2 on MultiMedQA. Using
a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA
3, MedMCQA4, PubMedQA5
and Measuring Massive Multitask Language Understanding (MMLU) clinical topics6),
including 67.6% accuracy on MedQA (US Medical Licensing Exam-style questions), surpassing the prior state of the art by more than 17%. However, human evaluation
reveals key gaps. To resolve this, we introduce instruction prompt tuning, a parameter-
efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians.
We show that comprehension, knowledge recall and reasoning improve with model
scale and instruction prompt tuning, suggesting the potential utility of LLMs in
medicine. Our human evaluations reveal limitations of today’s models, reinforcing
the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
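Instruction prompt tuning is described here only at a high level; as a rough illustration, the sketch below shows the soft-prompt idea it builds on: a small set of learnable prompt vectors is prepended to the frozen model's input embeddings, and only those vectors are trained on a handful of exemplars. The class name, dimensions, and initialization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to the input embeddings of a frozen LLM;
    only these vectors are updated, on a few curated exemplars."""
    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds):            # input_embeds: (batch, seq, d_model)
        batch = input_embeds.shape[0]
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, input_embeds], dim=1)
```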
Medicine is a humane endeavour in which language enables key interac -
tions for and between clinicians, researchers and patients. Yet, today’s
artificial intelligence (AI) models for applications in medicine and
healthcare have largely failed to fully utilize language. These models,
although useful, are predominantly single-task systems (for example,
for classification, regression or segmentation) lacking expressivity and
interactive capabilities1–3. As a result, there is a discordance between
what today’s models can do and what may be expected of them in
real-world clinical workflows4.
Recent advances in LLMs offer an opportunity to rethink AI sys -
tems, with language as a tool for mediating human–AI interaction.
LLMs are ‘foundation models’5, large pre-trained AI systems that can
be repurposed with minimal effort across numerous domains and
diverse tasks. These expressive and interactive models offer great promise in their ability to learn generally useful representations from
the knowledge encoded in medical corpora, at scale. There are several
exciting potential applications of such models in medicine, includ -
ing knowledge retrieval, clinical decision support, summarization
of key findings, triaging patients, addressing primary care concerns
and more.
However, the safety-critical nature of the domain necessitates
thoughtful development of evaluation frameworks, enabling research -
ers to meaningfully measure progress and capture and mitigate poten-
tial harms. This is especially important for LLMs, since these models
may produce text generations (hereafter referred to as ‘generations’)
that are misaligned with clinical and societal values. They may, for
instance, hallucinate convincing medical misinformation or incorpo -
rate biases that could exacerbate health disparities.https://doi.org/10.1038/s41586-023-06291-2
Received: 25 January 2023
Accepted: 5 June 2023
Published online: xx xx xxxx
Open access
1Google Research, Mountain View, CA, USA. 2National Library of Medicine, Bethesda, MD, USA. 3DeepMind, London, UK. 4These authors contributed equally: Karan Singhal, Shekoofeh Azizi, Tao Tu.
5These authors jointly supervised this work: Alan Karthikesalingam, Vivek Natarajan. ✉e-mail: karansinghal@google.com; shekazizi@google.com; alankarthi@google.com; natviv@google.com
|
NeurIPS-2020-learning-to-summarize-with-human-feedback-Paper.pdf | Learning to summarize from human feedback
Nisan Stiennon∗Long Ouyang∗Jeff Wu∗Daniel M. Ziegler∗Ryan Lowe∗
Chelsea Voss∗Alec Radford Dario Amodei Paul Christiano∗
OpenAI
Abstract
As language models become more powerful, training and evaluation are increas-
ingly bottlenecked by the data and metrics used for a particular task. For example,
summarization models are often trained to predict human reference summaries and
evaluated using ROUGE, but both of these metrics are rough proxies for what we
really care about—summary quality. In this work, we show that it is possible to
significantly improve summary quality by training a model to optimize for human
preferences. We collect a large, high-quality dataset of human comparisons be-
tween summaries, train a model to predict the human-preferred summary, and use
that model as a reward function to fine-tune a summarization policy using reinforce-
ment learning. We apply our method to a version of the TL;DR dataset of Reddit
posts [ 63] and find that our models significantly outperform both human reference
summaries and much larger models fine-tuned with supervised learning alone. Our
models also transfer to CNN/DM news articles [ 22], producing summaries nearly
as good as the human reference without any news-specific fine-tuning.2 We con-
duct extensive analyses to understand our human feedback dataset and fine-tuned
models.3 We establish that our reward model generalizes to new datasets, and that
optimizing our reward model results in better summaries than optimizing ROUGE
according to humans. We hope the evidence from our paper motivates machine
learning researchers to pay closer attention to how their training loss affects the
model behavior they actually want.
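The core of the pipeline described above is a reward model trained on pairwise human comparisons; a minimal sketch of that comparison loss is shown below (the function name and tensor shapes are assumptions). The learned reward is then maximized with reinforcement learning, typically with a penalty that keeps the fine-tuned policy close to the supervised baseline.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise comparison loss: push the reward of the human-preferred summary
    above the reward of the rejected one for each labelled comparison."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()
```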
1 Introduction
Large-scale language model pretraining has become increasingly prevalent for achieving high per-
formance on a variety of natural language processing (NLP) tasks. When applying these models
to a specific task, they are usually fine-tuned using supervised learning, often to maximize the log
probability of a set of human demonstrations.
While this strategy has led to markedly improved performance, there is still a misalignment between
this fine-tuning objective—maximizing the likelihood of human-written text—and what we care
about—generating high-quality outputs as determined by humans. This misalignment has several
causes: the maximum likelihood objective has no distinction between important errors (e.g. making
up facts [ 41]) and unimportant errors (e.g. selecting the precise word from a set of synonyms); models
∗This was a joint project of the OpenAI Reflection team. Author order was randomized amongst {LO, JW,
DZ, NS}; CV and RL were full-time contributors for most of the duration. PC is the team lead.
2Samples from all of our models can be viewed on our website.
3We provide inference code for our 1.3B models and baselines, as well as a model card and our human
feedback dataset with over 64k summary comparisons, here.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. |
10.1038.s41467-024-46715-9.pdf | Article https://doi.org/10.1038/s41467-024-46715-9
High-throughput prediction of protein
conformational distributions with subsampled AlphaFold2
Gabriel Monteiro da Silva1, Jennifer Y. Cui1, David C. Dalgarno2,
George P. Lisi1,3 & Brenda M. Rubenstein1,3
This paper presents an innovative approach for predicting the relative popu-
lations of protein conformations using AlphaFold 2, an AI-powered method
that has revolutionized biology by enabling the accurate prediction of protein
structures. While AlphaFold 2 has shown exceptional accuracy and speed, it is
designed to predict proteins' ground state conformations and is limited in its
ability to predict conformational landscapes. Here, we demonstrate how
AlphaFold 2 can directly predict the relative populations of different protein
conformations by subsampling multiple sequence alignments. We tested our
method against nuclear magnetic resonance experiments on two proteins with
drastically different amounts of available sequence data, Abl1 kinase and the
granulocyte-macrophage colony-stimulating factor, and predicted changes in
their relative state populations with more than 80% accuracy. Our subsampling
approach worked best when used to qualitatively predict the effects of mutations or evolution on the conformational landscape and well-populated
states of proteins. It thus offers a fast and cost-effective way to predict the
relative populations of protein conformations at even single-point mutation
resolution, making it a useful tool for pharmacology, analysis of experimental
results, and predicting evolution.
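As a rough sketch of the subsampling idea, the loop below repeatedly feeds the structure predictor a small random subset of the multiple sequence alignment and tallies which conformational state each prediction falls into; the relative counts approximate state populations. Both `predict_structure` and `assign_state` are placeholders (an AF2 call and a state classifier, e.g. active vs. inactive kinase), and the seed count and subsample size are illustrative rather than the values used in the paper.

```python
import random
from collections import Counter

def conformational_populations(sequence, msa, n_seeds=32, max_extra_seqs=16):
    """For each random seed, give the predictor only a small MSA subsample and
    record which conformational state the resulting structure falls into."""
    states = []
    for seed in range(n_seeds):
        rng = random.Random(seed)
        sub_msa = [msa[0]] + rng.sample(msa[1:], k=min(max_extra_seqs, len(msa) - 1))
        structure = predict_structure(sequence, sub_msa)   # placeholder for an AF2 call
        states.append(assign_state(structure))             # placeholder state classifier
    return Counter(states)                                 # relative populations across seeds
```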
Proteins are essential biomolecules that carry out a wide range of
functions in living organisms. Understanding their three-dimensional
structures is critical for elucidating their functions and designing drugs that target them
1. Historically, experimental techniques such as X-ray
crystallography, nuclear magnetic resonance (NMR) spectroscopy, and electron microscopy have been used to determine protein structures
2–4. However, these methods can be time-consuming, tech-
nically challenging, and expensive, and may not work for all proteins5.
To meet this challenge, ab initio structure prediction methods, which use computational algorithms to predict protein structures from their amino acid sequences, have been developed
6. For many years, ab initio
structure prediction methods have relied on physics-based algorithms to predict stable protein structures
7. Although successful, these
methods are challenged by larger and more complex proteins8.The recent development of machine learning algorithms has
significantly improved the speed of protein structure prediction9,10.
One of the most remarkable achievements in this area is the AlphaFold 2 (AF2) engine developed by DeepMind, which uses a deep neural network to predict ground state protein structures from amino acid sequences
11,12. AlphaFold 2 was trained using large amounts of
experimental data and incorporates co-evolutionary information from massive metagenomic databases
11. Its accuracy has revolutionized the
field of protein structure prediction11,13,14, opening up new possibilities
for drug discovery and basic research with clear consequences for human health
15,16.
However, a series of studies have found that the default AF2
algorithm is limited in its capacity to predict alternative protein con-
formations and the effects of sequence variants17,18. Although AF2's
Received: 3 August 2023
Accepted: 28 February 2024
1Brown University Department of Molecular and Cell Biology and Biochemistry, Providence, RI, USA.2Dalgarno Scienti fic LLC, Brookline, MA, USA.3Brown
University Department of Chemistry, Providence, RI, USA. e-mail: brenda_rubenstein@brown.edu
Nature Communications | (2024) 15:2464 |
2401.13660.pdf | MambaByte: Token-free Selective State Space Model
Junxiong Wang Tushaar Gangavarapu Jing Nathan Yan Alexander M Rush
Cornell University
{jw2544,tg352,jy858,arush}@cornell.edu
Abstract
Token-free language models learn directly from raw bytes and remove the bias of
subword tokenization. Operating on bytes, however, results in significantly longer
sequences, and standard autoregressive Transformers scale poorly in such settings.
We experiment with MambaByte, a token-free adaptation of the Mamba state space
model, trained autoregressively on byte sequences. Our experiments indicate the
computational efficiency of MambaByte compared to other byte-level models. We
also find MambaByte to be competitive with and even outperform state-of-the-art
subword Transformers. Furthermore, owing to linear scaling in length, MambaByte
benefits from fast inference compared to Transformers. Our findings establish the
viability of MambaByte in enabling token-free language modeling.
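The token-free setup is simple to state in code: every UTF-8 byte is a symbol, so the vocabulary shrinks to 256 entries while sequences grow several-fold relative to subword tokenization, which is exactly the length pressure the selective state space model is meant to absorb. A minimal illustration:

```python
def to_byte_ids(text: str) -> list[int]:
    # Token-free input: each UTF-8 byte is its own symbol (vocabulary size 256).
    return list(text.encode("utf-8"))

ids = to_byte_ids("Token-free language models learn directly from raw bytes.")
print(len(ids))   # byte-level sequence length, several times the subword token count
```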
[Figure 1 plot: bits per byte versus training step (left) and training exaFLOPs (right) for MegaByte-193M+177M (patch 4 and 8), Gated-S4D-368M, MambaByte-353M, and Transformer-361M.]
Figure 1: Benchmarking byte-level models with a fixed parameter budget. Language modeling
results on PG19 ( 8,192consecutive bytes), comparing the standard Transformer [Vaswani et al.,
2017, Su et al., 2021], MegaByte Transformer [Yu et al., 2023], gated diagonalized S4 [Mehta et al.,
2023], and MambaByte. (Left) Model loss over training step. (Right) FLOP-normalized training cost.
MambaByte reaches Transformer loss in less than one-third of the compute budget.
1 Introduction
When defining a language model, a base tokenization is typically used—either words [Bengio et al.,
2000], subwords [Schuster and Nakajima, 2012, Sennrich et al., 2015, Wu et al., 2016, Wang et al.,
Copyright 2024 by the author(s). |
1905.13678.pdf | Learning Sparse Networks Using Targeted Dropout
Aidan N. Gomez1,2,3 Ivan Zhang2
Siddhartha Rao Kamalakara2 Divyam Madaan2
Kevin Swersky1 Yarin Gal3 Geoffrey E. Hinton1
1Google Brain2for.ai3Department of Computer Science
University of Oxford
Abstract
Neural networks are easier to optimise when they have many more weights than
are required for modelling the mapping from inputs to outputs. This suggests a
two-stage learning procedure that first learns a large net and then prunes away con-
nections or hidden units. But standard training does not necessarily encourage nets
to be amenable to pruning. We introduce targeted dropout, a method for training a
neural network so that it is robust to subsequent pruning. Before computing the
gradients for each weight update, targeted dropout stochastically selects a set of
units or weights to be dropped using a simple self-reinforcing sparsity criterion and
then computes the gradients for the remaining weights. The resulting network is
robust to post hoc pruning of weights or units that frequently occur in the dropped
sets. The method improves upon more complicated sparsifying regularisers while
being simple to implement and easy to tune.
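As a minimal sketch of the idea (with illustrative values for the targeting proportion and drop rate, not necessarily those used in the paper), targeted weight dropout restricts stochastic dropping to the lowest-magnitude weights, so that training concentrates importance in the weights that will survive post hoc pruning:

```python
import torch

def targeted_weight_dropout(w: torch.Tensor, gamma: float = 0.75, alpha: float = 0.5) -> torch.Tensor:
    """Drop only weights in the bottom-gamma fraction by magnitude, each with
    probability alpha; applied during training, with weights left intact at test time."""
    k = max(1, int(gamma * w.numel()))
    threshold = w.abs().flatten().kthvalue(k).values
    candidates = w.abs() <= threshold                      # the targeted (prunable) set
    drop_mask = candidates & (torch.rand_like(w) < alpha)  # stochastic drop within that set
    return w * (~drop_mask).to(w.dtype)
```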
1 Introduction
Neural networks are a powerful class of models that achieve the state-of-the-art on a wide range of
tasks such as object recognition, speech recognition, and machine translation. One reason for their
success is that they are extremely flexible models because they have a large number of learnable
parameters. However, this flexibility can lead to overfitting, and can unnecessarily increase the
computational and storage requirements of the network.
There has been a large amount of work on developing strategies to compress neural networks. One
intuitive strategy is sparsification : removing weights or entire units from the network. Sparsity can
be encouraged during learning by the use of sparsity-inducing regularisers, like L1 or L0 penalties. It
can also be imposed by post hoc pruning, where a full-sized network is trained, and then sparsified
according to some pruning strategy. Ideally, given some measurement of task performance, we
would prune the weights or units that provide the least amount of benefit to the task. Finding
the optimal set is, in general, a difficult combinatorial problem, and even a greedy strategy would
require an unrealistic number of task evaluations, as there are often millions of parameters. Common
pruning strategies therefore focus on fast approximations, such as removing weights with the smallest
magnitude [ 12], or ranking the weights by the sensitivity of the task performance with respect to
the weights, and then removing the least-sensitive ones [ 22]. The hope is that these approximations
correlate well with task performance, so that pruning results in a highly compressed network while
causing little negative impact to task performance, however this may not always be the case.
Our approach is based on the observation that dropout regularisation [ 16,32] itself enforces sparsity
tolerance during training, by sparsifying the network with each forward pass. This encourages the
Preprint. Under review. |
10.1101.2021.02.12.430858.pdf | MSA Transformer
Roshan Rao1,2 Jason Liu3 Robert Verkuil3 Joshua Meier3
John F. Canny1 Pieter Abbeel1 Tom Sercu3 Alexander Rives3,4
Abstract
Unsupervised protein language models trained
across millions of diverse sequences learn struc-
ture and function of proteins. Protein language
models studied to date have been trained to per-
form inference from individual sequences. The
longstanding approach in computational biology
has been to make inferences from a family of evo-
lutionarily related sequences by fitting a model
to each family independently. In this work we
combine the two paradigms. We introduce a pro-
tein language model which takes as input a set
of sequences in the form of a multiple sequence
alignment. The model interleaves row and column
attention across the input sequences and is trained
with a variant of the masked language modeling
objective across many protein families. The per-
formance of the model surpasses current state-of-
the-art unsupervised structure learning methods
by a wide margin, with far greater parameter effi-
ciency than prior state-of-the-art protein language
models.
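A minimal sketch of the interleaved attention pattern is given below: tied row attention shares one L×L attention map across all M aligned sequences, and column attention attends across the M sequences at each alignment column, giving the O(LM²) + O(ML²) cost quoted in Figure 1. Single-head, bias-free projections and square-root normalisation of the tied logits are assumed here purely for brevity; this is an illustration, not the model's full implementation.

```python
import torch
import torch.nn.functional as F

def tied_row_attention(x, wq, wk, wv):
    # x: (M, L, d) -- M aligned sequences (rows), L alignment columns, d features.
    M, L, d = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv                       # each (M, L, d)
    # Tie the attention map across rows by summing per-row logits (sqrt(M) normalisation).
    logits = torch.einsum("mid,mjd->ij", q, k) / (d ** 0.5 * M ** 0.5)
    attn = F.softmax(logits, dim=-1)                       # single (L, L) map shared by all rows
    return torch.einsum("ij,mjd->mid", attn, v)            # cost O(M * L^2 * d)

def column_attention(x, wq, wk, wv):
    # Attend across the M sequences independently at each column: cost O(L * M^2 * d).
    xc = x.transpose(0, 1)                                 # (L, M, d)
    q, k, v = xc @ wq, xc @ wk, xc @ wv
    attn = F.softmax(torch.einsum("lmd,lnd->lmn", q, k) / x.shape[-1] ** 0.5, dim=-1)
    return torch.einsum("lmn,lnd->lmd", attn, v).transpose(0, 1)
```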
1. Introduction
Unsupervised models learn protein structure from patterns
in sequences. Sequence variation within a protein fam-
ily conveys information about the structure of the protein
(Yanofsky et al., 1964; Altschuh et al., 1988; G¨obel et al.,
1994). Since evolution is not free to choose the identity of
amino acids independently at sites that are in contact in the
folded three-dimensional structure, patterns are imprinted
onto the sequences selected by evolution. Constraints on the
structure of a protein can be inferred from patterns in related
sequences. The predominant unsupervised approach is to
fit a Markov Random Field in the form of a Potts Model to
a family of aligned sequences to extract a coevolutionary
1UC Berkeley2Work performed during internship at FAIR.
3Facebook AI Research4New York University. Code and weights
available at https://github.com/facebookresearch/
esm. Correspondence to: Roshan Rao <rmrao@berkeley.edu >,
Alexander Rives <arives@fb.com>.
[Figure 1 diagram: column attention; untied vs. tied row attention; and a single MSA Transformer block built from row attention, column attention, and feed-forward sublayers, each preceded by LayerNorm.]
Figure 1. Left: Sparsity structure of the attention. By constraining
attention to operate over rows and columns, computational cost
is reduced from O(M²L²) to O(LM²) + O(ML²), where M is
the number of rows and L the number of columns in the MSA.
Middle: Untied row attention uses different attention maps for
each sequence in the MSA. Tied row attention uses a single atten-
tion map for all sequences in the MSA, thereby constraining the
contact structure. Ablation studies consider the use of both tied
and untied attention. The final model uses tied attention. Right:
A single MSA Transformer block. The depicted architecture is
from the final model, some ablations alter the ordering of row and
column attention.
signal (Lapedes et al., 1999; Thomas et al., 2008; Weigt
et al., 2009).
A new line of work explores unsupervised protein language
models (Alley et al., 2019; Rives et al., 2020; Heinzinger
et al., 2019; Rao et al., 2019). This approach fits large
neural networks with shared parameters across millions of
diverse sequences, rather than fitting a model separately
to each family of sequences. At inference time, a single
forward pass of an end-to-end model replaces the multi-
stage pipeline, involving sequence search, alignment, and
model fitting steps, standard in bioinformatics. Recently,
promising results have shown that protein language models
learn secondary structure, long-range contacts, and function
via the unsupervised objective (Rives et al., 2020), making
them an alternative to the classical pipeline. While small and
recurrent models fall well short of state-of-the-art (Rao et al.,
2019), the internal representations of very large transformer
models are competitive with Potts models for unsupervised
structure learning (Rives et al., 2020; Rao et al., 2021).
Potts models have an important advantage over protein lan- |
1911.12360.pdf | Published as a conference paper at ICLR 2021
HOW MUCH OVER-PARAMETERIZATION IS SUFFI-
CIENT TO LEARN DEEP RELU NETWORKS?
Zixiang Chen†*, Yuan Cao†*, Difan Zou†*, Quanquan Gu†
†Department of Computer Science, University of California, Los Angeles
{chenzx19,yuancao,knowzou,qgu}@cs.ucla.edu
ABSTRACT
A recent line of research on deep learning focuses on the extremely over-
parameterized setting, and shows that when the network width is larger than
a high degree polynomial of the training sample size n and the inverse of the target
error ϵ⁻¹, deep neural networks learned by (stochastic) gradient descent enjoy
nice optimization and generalization guarantees. Very recently, it is shown that
under certain margin assumptions on the training data, a polylogarithmic width
condition suffices for two-layer ReLU networks to converge and generalize (Ji
and Telgarsky, 2020). However, whether deep neural networks can be learned
with such a mild over-parameterization is still an open question. In this work, we
answer this question affirmatively and establish sharper learning guarantees for
deep ReLU networks trained by (stochastic) gradient descent. In specific, under
certain assumptions made in previous work, our optimization and generalization
guarantees hold with network width polylogarithmic in n and ϵ⁻¹. Our results
push the study of over-parameterized deep neural networks towards more practical
settings.
1 I NTRODUCTION
Deep neural networks have become one of the most important and prevalent machine learning models
due to their remarkable power in many real-world applications. However, the success of deep learning
has not been well-explained in theory. It remains mysterious why standard optimization algorithms
tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss
function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely
over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding
these mysterious phenomena on the optimization and generalization of deep neural networks is one
of the most fundamental problems in deep learning theory.
Recent breakthroughs have shed light on the optimization and generalization of deep neural networks
(DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much
larger than the number of training examples). It has been shown that with the standard random
initialization, the training of over-parameterized deep neural networks can be characterized by a
kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In
the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network
function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018;
Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and
generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al.,
2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks,
(stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss
function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019)
established generalization bounds of neural networks trained with (stochastic) gradient descent, and
showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space
(RKHS) or the corresponding random feature function class.
Although existing results in the neural tangent kernel regime have provided important insights
into the learning of deep neural networks, they require the neural network to be extremely wide.
*Equal contribution.
|
2309.00754.pdf | EFFICIENT RLHF: R EDUCING THE MEMORY
USAGE OF PPO
Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, Yelong Shen
Microsoft
{misantac,yadonglu,hanyu,yuanzhili,yelong.shen}@microsoft.com
ABSTRACT
Reinforcement Learning with Human Feedback (RLHF) has revolutionized lan-
guage modeling by aligning models with human preferences. However, the RL
stage, Proximal Policy Optimization (PPO), requires over 3x the memory of Su-
pervised Fine-Tuning (SFT), making it infeasible to use for most practitioners. To
address this issue, we present a comprehensive analysis the memory usage, perfor-
mance, and training time of memory-savings techniques for PPO. We introduce
Hydra-RLHF by first integrating the SFT and Reward models and then dynamically
turning LoRA "off" during training. Our experiments show: 1. Using LoRA during
PPO reduces its memory usage to be smaller than SFT while improving alignment
across four public benchmarks, and 2. Hydra-PPO reduces the latency per sam-
ple of LoRA-PPO by up to 65% while maintaining its performance. Our results
demonstrate that Hydra-PPO is a simple and promising solution for enabling more
widespread usage of RLHF.
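The memory saving rests on a simple observation: if the policy is a frozen base model plus a LoRA update, switching the update off recovers the original (reference) model, so a single copy of the base weights can play both roles during PPO. The sketch below is a generic illustration of that toggle, not the authors' implementation; during rollouts the same backbone can be run once with the adapter enabled (actor) and once disabled (reference) to compute the KL penalty.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a low-rank update that can be switched off;
    with the adapter disabled, the layer reproduces the original (reference) model."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as identity
        self.enabled = True

    def forward(self, x):
        out = self.base(x)
        if self.enabled:
            out = out + x @ self.A.T @ self.B.T   # low-rank delta (x A^T) B^T
        return out
```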
1 Introduction
Since ChatGPT, GPT-4, and Llama-2 family models entered the public sphere, they have impressed
users with their ability to be helpful assistants for a surprising number of tasks [ 1,2,3,4,5]. One key
to their success, along with many other foundation models [ 6], is model alignment through RLHF.
Training a massive language model results in a network with a large amount of knowledge, however,
it is not trained to discriminate within that knowledge, which could cause undesired behaviour and
possibly lead to societal harm [ 7]. Alignment aims to solve this issue by adjusting the model’s
behaviour and has become an integral part for creating safe and controllable foundation models [ 8,9].
While RLHF improves model alignment it is limited in usage, being both highly complex and
demanding a massive amount of memory when loading and training multiple models during PPO
[10,11]. Because the use of RLHF is in its infancy, there is a strong need to evaluate its variations in
terms of speed and performance.
To address this need, we delve into the training process and model architectures of standard RLHF-
PPO. Through this investigation, we identify substantial opportunities for memory/computation cost
reduction through the implementation of model-sharing between Reference/Reward Models and
Actor/Critic Models.
Given these findings, we propose Hydra-PPO to reduce the number of trained and static models in
memory during PPO. We perform run-time and performance comparisons to show these memory
savings can then be utilized to increase the training batch size, reducing the per-sample latency of
PPO by up to 65%.
Preprint. |
121-Testing-Manifold.pdf | JOURNAL OF THE
AMERICAN MATHEMATICAL SOCIETY
Volume 29, Number 4, October 2016, Pages 983–1049
http://dx.doi.org/10.1090/jams/852. Article electronically published on February 9, 2016
TESTING THE MANIFOLD HYPOTHESIS
CHARLES FEFFERMAN, SANJOY MITTER, AND HARIHARAN NARAYANAN
Contents
1. Introduction
1.1. Definitions
1.2. Constants
1.3. d-planes
1.4. Patches
1.5. Imbedded manifolds
1.6. A note on controlled constants
2. Literature on manifold learning
3. Sample complexity of manifold fitting
3.1. Sketch of the proof of Theorem 1
4. Proof of Theorem 1
4.1. A bound on the size of an ϵ-net
4.2. Tools from empirical processes
5. Fitting k affine subspaces of dimension d
6. Dimension reduction
7. Overview of the algorithm for testing the manifold hypothesis
8. Disc bundles
9. A key result
10. Constructing cylinder packets
11. Constructing a disc bundle possessing the desired characteristics
11.1. Approximate squared distance functions
11.2. The disc bundles constructed from approximate-squared-distance functions are good
12. Constructing an exhaustive family of disc bundles
13. Finding good local sections
13.1. Basic convex sets
13.2. Preprocessing
13.3. Convex program
13.4. Complexity
14. Patching local sections together
15. The reach of the final manifold M_fin
Received by the editors March 9, 2014 and, in revised form, February 3, 2015 and August 9,
2015.
2010Mathematics Subject Classification. Primary 62G08, 62H15; Secondary 55R10, 57R40.
The first author was supported by NSF grant DMS 1265524, AFOSR grant FA9550-12-1-0425
and U.S.-Israel Binational Science Foundation grant 2014055.
The second author was supported by NSF grant EECS-1135843.
©2016 American Mathematical Society |
Moving-structural-biology-forward-together-cell.pdf | Leading Edge
Editorial
Moving structural biology forward together
The field of structural biology has undergone revolutions in the
past decades. Technological advances have pushed the boundaries of what is possible. With that, structural biologists today
can solve more physiologically relevant structures than they
could in the past, and often at higher resolution. These structures of molecules and macromolecular complexes have provided
foundational knowledge from which key mechanistic, functional,
and biological insights have emerged. It is an exciting time! As part of Cell’s 50th anniversary, this issue spotlights structural
biology, celebrating the progress and breakthroughs of the
past and highlighting future directions of research with Reviews, Commentaries, a Perspective, and first-person viewpoints from scientists.
When Cell launched, X-ray crystallography was the primary
technique used to solve structures of molecules, with only a dozen protein structures revealed at that time. As technology
developed, scientists could solve more structures, including
those of large complexes, with increasingly higher resolution. A decade ago, advances in single-particle cryo-electron micro-
scopy (cryo-EM) caused a revolution in the field where structures
could be determined at close to atomic resolution without the need for crystallization, making it possible to see molecules,
especially membrane proteins, that previously were difficult to
study. Advances in cryo-EM now also bring insights into protein dynamics, once the sole province of nuclear magnetic resonance spectroscopy (NMR). More recently, cryo-electron to-
mography (cryo-ET), AlphaFold in all its variations, and emerging
integrative approaches are enabling us to study molecules with sub-nanometer resolution in their native environment, to visu-
alize 3D architecture of organelles inside whole cells and tissues,
and to predict protein structures. In a Review in this issue, Benjamin Engel and colleagues provide an overview of recent
technological development facilitating biological research
across space and time. While challenges remain, we see tremendous possibilities to utilize individual and combined technologies
to tackle important biological questions.
Structures provide a specific way to understand biology. Visu-
alizing structures offers molecular and functional insights that cannot be obtained otherwise. From the first structure paper
published in Cell showing nucleosomes organizing DNA in
1975, to T cell receptor structures and their functions, and to structures of CRISPR-Cas systems, structural biology provides
foundational knowledge that shapes our understanding of mole-
cules and their functions in biology. Moreover, by utilizing emerging and integrative technologies, we gain a level of insight
that can challenge previous dogmas or shift scientific concepts
considerably. Take studies on ribosomes as an example. X-ray crystal structures of ribosomal subunits together with other data provided evidence to support the idea that ribosomes are
not typical protein catalysts but rather RNA catalysts. More
recently, in situ structural analyses revealed new dimensions to
protein synthesis with the finding that the distribution of elonga-
tion states of ribosomes inside cells differs from what was predicted based on models derived from in vitro analyses. Looking
ahead to the next decade, we anticipate that structural analysis will not only explain individual molecules at high molecular detail
but also reveal functional modules in situ, helping us ultimately
understand how cells work. In this issue, Martin Beck and colleagues share their perspectives on the future direction of struc-
ture biology and further explore the concept of digital twins,
where the use of virtual reality to visualize cells in four dimensions marries spatial and temporal information to understand cells
across time. Also in this issue, Mark Murcko and James Fraser
remind us in a Commentary that structural biology, as powerful as it is, has limitations that should not be overlooked. They highlight fundamental challenges in defining ‘‘ground truth’’ and sug-
gest new benchmarks for structure biology.
In addition to driving fundamental research forward, structures
have been utilized to develop drugs, antibodies, and vaccines as
well as for de novo protein design. Structural insights on viral life
cycles and host infection by viruses, such as human immunode-ficiency virus (HIV) and SARS-CoV-2, have facilitated therapeutic
development and advanced our understanding of drug resis-
tance. In this issue, Edward Blake Miller and colleagues comment on how predicted structures can be confidently
applied to drug design challenges by quantifying the accuracy
of predicted structures. Additionally, Tanja Kortemme reviews the power of AlphaFold and its offspring, as well as modeling more generally, in de novo protein design to understand molec-
ular and cellular functions, to reengineer proteins with new
functions, and to promote practical applications in industry and medicine. We share the excitement that protein design
and drug development are being transformed by artificial-intelli-
gence-based methods and models.
Behind every scientific discovery, there are scientists and their
stories. We and our colleagues across Cell Press, including Mo-
lecular Cell ,Structure , and Trends in Biochemical Sciences , are
taking the opportunity to highlight some of those people. In their
current February 1st issues, Molecular Cell features a Voices
piece from experts in the field discussing how to use AlphaFold in biological research; Structure includes a Voices piece from
group leaders in India reflecting on the status of cryo-EM
research in their country; and Trends in Biochemical Sciences fo-
cuses on structural biology broadly, including an interview with researchers on how to define and share results. In 1998, Trends
in Biochemical Sciences published a Reflections article from Jia-
Huai Wang telling the story of insulin structure determination by Dorothy Hodgkin’s group in Oxford and the Beijing Insulin Group
in China and how Hodgkin influenced a generation of Chinese
structural biologists. Now, in a Conversation with Mingjie Zhang, Beili Wu, and Zihe Rao in the current issue of Cell, we hear about
their excitement for structural biology research in China and how
the spirit of that historic communication and collaboration thrives
in research today. As time goes by, what is unchanged is the appreciation of and advocacy for continuing collaborations
among scientific groups around the world, which is essential
Cell 187, February 1, 2024. ©2024 Elsevier Inc. |
2303.11366.pdf | Reflexion: Language Agents with
Verbal Reinforcement Learning
Noah Shinn
Northeastern University
noahshinn024@gmail.comFederico Cassano
Northeastern University
cassano.f@northeastern.edu
Edward Berman
Northeastern University
berman.ed@northeastern.eduAshwin Gopinath
Massachusetts Institute of Technology
agopi@mit.edu
Karthik Narasimhan
Princeton University
karthikn@princeton.eduShunyu Yao
Princeton University
shunyuy@princeton.edu
Abstract
Large language models (LLMs) have been increasingly used to interact with exter-
nal environments (e.g., games, compilers, APIs) as goal-driven agents. However,
it remains challenging for these language agents to quickly and efficiently learn
from trial-and-error as traditional reinforcement learning methods require exten-
sive training samples and expensive model fine-tuning. We propose Reflexion , a
novel framework to reinforce language agents not by updating weights, but in-
stead through linguistic feedback. Concretely, Reflexion agents verbally reflect
on task feedback signals, then maintain their own reflective text in an episodic
memory buffer to induce better decision-making in subsequent trials. Reflexion is
flexible enough to incorporate various types (scalar values or free-form language)
and sources (external or internally simulated) of feedback signals, and obtains
significant improvements over a baseline agent across diverse tasks (sequential
decision-making, coding, language reasoning). For example, Reflexion achieves a
91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previ-
ous state-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis
studies using different feedback signals, feedback incorporation methods, and agent
types, and provide insights into how they affect performance. We release all code,
demos, and datasets at https://github.com/noahshinn024/reflexion .
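A minimal sketch of the trial loop is shown below; `llm` and `run_task` are placeholders for the underlying language model call and the environment or unit-test harness, and the prompt wording is illustrative rather than taken from the paper.

```python
def reflexion_loop(task, llm, run_task, max_trials=4):
    """Sketch of the Reflexion trial loop: act, evaluate, verbally reflect,
    and carry the reflections forward in an episodic memory buffer."""
    memory = []                      # long-term verbal memory across trials
    for trial in range(max_trials):
        trajectory, success = run_task(task, context=memory)   # actor attempt
        if success:
            return trajectory
        reflection = llm(
            f"Task: {task}\nFailed attempt: {trajectory}\n"
            "In a few sentences, explain what went wrong and what to try next."
        )
        memory.append(reflection)    # conditions the next attempt
    return None
```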
1 Introduction
Recent works such as ReAct [ 30], SayCan [ 1], Toolformer [ 22], HuggingGPT [ 23], generative
agents [ 19], and WebGPT [ 17] have demonstrated the feasibility of autonomous decision-making
agents that are built on top of a large language model (LLM) core. These methods use LLMs to
generate text and 'actions' that can be used in API calls and executed in an environment. Since
they rely on massive models with an enormous number of parameters, such approaches have been
so far limited to using in-context examples as a way of teaching the agents, since more traditional
optimization schemes like reinforcement learning with gradient descent require substantial amounts
of compute and time.
Preprint. Under review. |
2203.15556.pdf | Training Compute-Optimal Large Language Models
Jordan Hoffmann★, Sebastian Borgeaud★, Arthur Mensch★, Elena Buchatskaya, Trevor Cai, Eliza Rutherford,
Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland,
Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan,
Erich Elsen, Jack W. Rae, Oriol Vinyals and Laurent Sifre★
★Equal contributions
We investigate the optimal model size and number of tokens for training a transformer language model
under a given compute budget. We find that current large language models are significantly under-
trained, a consequence of the recent focus on scaling language models whilst keeping the amount of
training data constant. By training over 400 language models ranging from 70 million to over 16 billion
parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and
the number of training tokens should be scaled equally: for every doubling of model size the number
of training tokens should also be doubled. We test this hypothesis by training a predicted compute-
optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and
4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B),
Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks.
This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly
facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of
67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
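A back-of-the-envelope version of the compute-optimal rule is easy to state: with training compute approximated as C ≈ 6ND for N parameters and D tokens, and with model size and token count scaled in equal proportion (roughly 20 tokens per parameter is the commonly quoted approximation), a FLOP budget maps to a model size and token count as sketched below. The coefficient 20 is an assumption for illustration; the paper fits the scaling exponents from over 400 training runs.

```python
def compute_optimal(flop_budget: float, tokens_per_param: float = 20.0):
    # C ~ 6 * N * D with D ~ tokens_per_param * N  =>  N ~ sqrt(C / (6 * tokens_per_param))
    n_params = (flop_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

n, d = compute_optimal(5.76e23)               # roughly Chinchilla's training budget
print(f"{n:.2e} parameters, {d:.2e} tokens")  # ~7e10 parameters, ~1.4e12 tokens
```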
1. Introduction
Recently a series of Large Language Models (LLMs) have been introduced (Brown et al., 2020; Lieber
et al., 2021; Rae et al., 2021; Smith et al., 2022; Thoppilan et al., 2022), with the largest dense
language models now having over 500 billion parameters. These large autoregressive transformers
(Vaswani et al., 2017) have demonstrated impressive performance on many tasks using a variety of
evaluation protocols such as zero-shot, few-shot, and fine-tuning.
The compute and energy cost for training large language models is substantial (Rae et al., 2021;
Thoppilan et al., 2022) and rises with increasing model size. In practice, the allocated training
compute budget is often known in advance: how many accelerators are available and for how long
we want to use them. Since it is typically only feasible to train these large models once, accurately
estimating the best model hyperparameters for a given compute budget is critical (Tay et al., 2021).
Kaplan et al. (2020) showed that there is a power law relationship between the number of
parameters in an autoregressive language model (LM) and its performance. As a result, the field has
been training larger and larger models, expecting performance improvements. One notable conclusion
in Kaplan et al. (2020) is that large models should not be trained to their lowest possible loss to be
compute optimal. Whilst we reach the same conclusion, we estimate that large models should be
trained for many more training tokens than recommended by the authors. Specifically, given a 10×
increase in computational budget, they suggest that the size of the model should increase 5.5× while
the number of training tokens should only increase 1.8×. Instead, we find that model size and the
number of training tokens should be scaled in equal proportions.
Following Kaplan et al. (2020) and the training setup of GPT-3 (Brown et al., 2020), many of the
recently trained large models have been trained for approximately 300 billion tokens (Table 1), in
line with the approach of predominantly increasing model size when increasing compute.
Corresponding authors: {jordanhoffmann|sborgeaud|amensch|sifre}@deepmind.com
©2023 DeepMind. All rights reserved. |
2304.15004.pdf | Are Emergent Abilities of Large Language Models a
Mirage?
Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo
Computer Science, Stanford University
Abstract
Recent work claims that large language models display emergent abilities , abil-
ities not present in smaller-scale models that are present in larger-scale models.
What makes emergent abilities intriguing is two-fold: their sharpness , transition-
ing seemingly instantaneously from not present to present, and their unpredictabil-
ity, appearing at seemingly unforeseeable model scales. Here, we present an al-
ternative explanation for emergent abilities: that for a particular task and model
family, when analyzing fixed model outputs, emergent abilities appear due to the
researcher’s choice of metric rather than due to fundamental changes in model
behavior with scale. Specifically, nonlinear or discontinuous metrics produce ap-
parent emergent abilities, whereas linear or continuous metrics produce smooth,
continuous, predictable changes in model performance. We present our alternative
explanation in a simple mathematical model, then test it in three complementary
ways: we (1) make, test and confirm three predictions on the effect of metric
choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abil-
ities, (2) make, test and confirm two predictions about metric choices in a meta-
analysis of emergent abilities on BIG-Bench; and (3) show how to choose metrics
to produce never-before-seen seemingly emergent abilities in multiple vision tasks
across diverse deep networks. Via all three analyses, we provide evidence that al-
leged emergent abilities evaporate with different metrics or with better statistics,
and may not be a fundamental property of scaling AI models.
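The argument can be reproduced in a few lines: if per-token accuracy improves smoothly with scale, a nonlinear metric such as exact match over a long output can still shoot up abruptly at some scale, while a linear per-token metric shows no such jump. The scaling curve below is synthetic and purely illustrative, not fit to any model family.

```python
import numpy as np

scales = np.logspace(7, 11, 9)                       # hypothetical parameter counts
per_token_acc = 1 - 0.5 * (scales / 1e7) ** -0.25    # smooth, assumed improvement with scale
seq_len = 20

exact_match = per_token_acc ** seq_len               # nonlinear metric: all 20 tokens correct
per_token = per_token_acc                            # linear metric: average token correctness

for n, em, pt in zip(scales, exact_match, per_token):
    print(f"{n:9.1e} params | exact match {em:6.3f} | per-token {pt:.3f}")
```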
1 Introduction
Emergent properties of complex systems have long been studied across disciplines, from physics to
biology to mathematics. The idea of emergence was popularized by Nobel Prize-winning physicist
P.W. Anderson’s “More Is Different” [1], which argues that as the complexity of a system increases,
new properties may materialize that cannot be predicted even from a precise quantitative understand-
ing of the system’s microscopic details. Recently, the idea of emergence gained significant attention
in machine learning due to observations that large language models (LLMs) such as GPT [3], PaLM
[6] and LaMDA [30] exhibit so-called “emergent abilities” [33, 8, 28, 3] (Fig. 1).
The term “emergent abilities of LLMs” was recently and crisply defined as “abilities that are not
present in smaller-scale models but are present in large-scale models; thus they cannot be predicted
by simply extrapolating the performance improvements on smaller-scale models” [33]. Such emer-
gent abilities were first discovered in the GPT-3 family [3]. Subsequent work emphasized the discov-
ery, writing that “[although model] performance is predictable at a general level, performance on a
specific task can sometimes emerge quite unpredictably and abruptly at scale” [8]. These quotations
collectively identify the two defining properties of emergent abilities in LLMs:
1.Sharpness , transitioning seemingly instantaneously from not present to present
Preprint. Under review. |
2309.01933.pdf | PROVABLY SAFE SYSTEMS :
THE ONLY PATH TO CONTROLLABLE AGI
Max Tegmark
Department of Physics
Insitute for AI & Fundamental Interactions
Massachusetts Institute of Technology
Cambridge, MA 02139
Steve Omohundro
Beneficial AI Research
Palo Alto, CA 94301
September 6, 2023
ABSTRACT
We describe a path to humanity safely thriving with powerful Artificial General Intelligences (AGIs)
by building them to provably satisfy human-specified requirements. We argue that this will soon be
technically feasible using advanced AI for formal verification and mechanistic interpretability. We
further argue that it is the only path which guarantees safe controlled AGI. We end with a list of
challenge problems whose solution would contribute to this positive outcome and invite readers to
join in this work.
Keywords Artificial Intelligence ·AI Safety ·Provably Safe Systems
1 Introduction
“Once the machine thinking method had started,
it would not take long to outstrip our feeble powers.
At some stage therefore we should have to expect the machines to take control”
Alan Turing 1951 [35]
AGI [91] safety is of the utmost urgency, since corporations and research labs are racing to build AGI despite promi-
nent AI researchers and business leaders warning that it may lead to human extinction [11]. While governments are
drafting AI regulations, there’s little indication that they will be sufficient to resist competitive pressures and prevent
the creation of AGI. Median estimates on the forecasting platform Metaculus of the date of AGI’s creation have plum-
meted over the past few years from many decades away to 2027 [25] or 2032 [24] depending on definitions, with
superintelligence expected to follow a few years later [23].
Is Alan Turing correct that we now “have to expect the machines to take control”? If AI safety research remains at
current paltry levels, this seems likely. Considering the stakes, the AI safety effort is absurdly small in terms of both
funding and the number of people. One analysis [73] estimates that less than $150 million will be spent on AI Safety
research this year, while, for example, $63 billion will be spent on cosmetic surgery [14] and $1 trillion on cigarettes
[13]. Another analyst estimates [10] that only about one in a thousand AI researchers works on safety.
Much of the current AI safety work is focused on “alignment” which attempts to fine-tune deep neural networks so
that their behavior becomes more aligned with human preferences. While this is valuable, we believe it is inadequate
for human safety, especially given the profusion of open-source AI that can be used maliciously. In the face of the
possibility of human extinction, we must adopt a “security mindset” [30] and rapidly work to create designs which
will be safe also against adversarial AGIs. With a security mindset, we must design safety both into AGIs and also
into the physical, digital, and social infrastructure that they interact with [5]. AGI computations are only dangerous
for us when they lead to harmful actions in the world. |
few-shot-clustering.pdf | Large Language Models Enable Few-Shot Clustering
Vijay Viswanathan1, Kiril Gashteovski2,
Carolin Lawrence2, Tongshuang Wu1, Graham Neubig1, 3
1Carnegie Mellon University,2NEC Laboratories Europe,3Inspired Cognition
Abstract
Unlike traditional unsupervised clustering,
semi-supervised clustering allows users to pro-
vide meaningful structure to the data, which
helps the clustering algorithm to match the
user’s intent. Existing approaches to semi-
supervised clustering require a significant
amount of feedback from an expert to improve
the clusters. In this paper, we ask whether
a large language model can amplify an ex-
pert’s guidance to enable query-efficient, few-
shot semi-supervised text clustering. We show
that LLMs are surprisingly effective at im-
proving clustering. We explore three stages
where LLMs can be incorporated into cluster-
ing: before clustering (improving input fea-
tures), during clustering (by providing con-
straints to the clusterer), and after clustering
(using LLMs post-correction). We find incor-
porating LLMs in the first two stages can rou-
tinely provide significant improvements in clus-
ter quality, and that LLMs enable a user to
make trade-offs between cost and accuracy to
produce desired clusters. We release our code
and LLM prompts for the public to use.1
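As one concrete instance of the "during clustering" stage, the sketch below turns a small number of LLM judgements into pairwise pseudo-constraints that a constraint-aware clusterer (for example, a PCKMeans-style algorithm) can consume; `llm_same_cluster` is a placeholder for a prompted LLM call seeded with a few expert-labelled pairs, and the pair budget is illustrative.

```python
from itertools import combinations

def llm_same_cluster(a: str, b: str) -> bool:
    """Placeholder: prompt an LLM, conditioned on a few expert-labelled example
    pairs, to judge whether two texts belong in the same cluster."""
    raise NotImplementedError  # hypothetical LLM API call

def llm_guided_constraints(texts: list[str], pair_budget: int = 1000):
    # Query the LLM on a bounded number of pairs and collect pseudo-constraints
    # (must-link / cannot-link) for a constrained clustering algorithm.
    must_link, cannot_link = [], []
    for i, j in list(combinations(range(len(texts)), 2))[:pair_budget]:
        (must_link if llm_same_cluster(texts[i], texts[j]) else cannot_link).append((i, j))
    return must_link, cannot_link
```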
1 Introduction
Unsupervised clustering aims to do an impossible
task: organize data in a way that satisfies a domain
expert’s needs without any specification of what
those needs are. Clustering, by its nature, is fun-
damentally an underspecified problem. According
to Caruana (2013), this underspecification makes
clustering “probably approximately useless.”
Semi-supervised clustering, on the other hand,
aims to solve this problem by enabling the domain
expert to guide the clustering algorithm (Bae et al.,
2020). Prior works have introduced different types
of interaction between an expert and a clustering
algorithm, such as initializing clusters with hand-
picked seed points (Basu et al., 2002), specifying
1https://github.com/viswavi/few-shot-clustering
[Figure 1 diagram: traditional semi-supervised clustering vs. LLM-guided few-shot clustering.]
Figure 1: In traditional semi-supervised clustering, a
user provides a large amount of feedback to the clusterer.
In our approach, the user prompts an LLM with a small
amount of feedback. The LLM then generates a large
amount of pseudo-feedback for the clusterer.
pairwise constraints (Basu et al., 2004; Zhang et al.,
2019), providing feature feedback (Dasgupta and
Ng, 2010), splitting or merging clusters (Awasthi
et al., 2013), or locking one cluster and refining the
rest (Coden et al., 2017). These interfaces have all
been shown to give experts control of the final clus-
ters. However, they require significant effort from
the expert. For example, in a simulation that uses
split/merge, pairwise constraint, and lock/refine in-
teractions (Coden et al., 2017), it took between 20
and 100 human-machine interactions to get any
clustering algorithm to produce clusters that fit the
human’s needs. Therefore, for large, real-world
datasets with a large number of possible clusters,
the feedback cost required by interactive clustering
algorithms can be immense.
Building on a body of recent work that uses
Large Language Models (LLMs) as noisy simu-
lations of human decision-making (Fu et al., 2023;
Horton, 2023; Park et al., 2023), we propose a dif-
ferent approach for semi-supervised text clustering.
In particular, we answer the following research
question: Can an expert provide a few demonstra-
tions of their desired interaction (e.g., pairwise
constraints) to a large language model, then let the
LLM direct the clustering algorithm? |
10.1038.s41586-019-1724-z.pdf | Grandmaster level in StarCraft II using
multi-agent reinforcement learning
Oriol Vinyals1,3*, Igor Babuschkin1,3, Wojciech M. Czarnecki1,3, Michaël Mathieu1,3,
Andrew Dudzik1,3, Junyoung Chung1,3, David H. Choi1,3, Richard Powell1,3, Timo Ewalds1,3,
Petko Georgiev1,3, Junhyuk Oh1,3, Dan Horgan1,3, Manuel Kroiss1,3, Ivo Danihelka1,3,
Aja Huang1,3, Laurent Sifre1,3, Trevor Cai1,3, John P. Agapiou1,3, Max Jaderberg1,
Alexander S. Vezhnevets1, Rémi Leblond1, Tobias Pohlen1, Valentin Dalibard1, David Budden1,
Yury Sulsky1, James Molloy1, Tom L. Paine1, Caglar Gulcehre1, Ziyu Wang1, Tobias Pfaff1,
Yuhuai Wu1, Roman Ring1, Dani Yogatama1, Dario Wünsch2, Katrina McKinney1, Oliver Smith1,
Tom Schaul1, Timothy Lillicrap1, Koray Kavukcuoglu1, Demis Hassabis1, Chris Apps1,3 &
David Silver1,3*
Many real-world applications require artificial agents to compete and coordinate
with other agents in complex environments. As a stepping stone to this goal, the
domain of StarCraft has emerged as an important challenge for artificial intelligence
research, owing to its iconic and enduring status among the most difficult
professional esports and its relevance to the real world in terms of its raw complexity
and multi-agent challenges. Over the course of a decade and numerous
competitions1–3, the strongest agents have simplified important aspects of the game,
utilized superhuman capabilities, or employed hand-crafted sub-systems4. Despite
these advantages, no previous agent has come close to matching the overall skill of
top StarCraft players. We chose to address the challenge of StarCraft using general-
purpose learning methods that are in principle applicable to other complex
domains: a multi-agent reinforcement learning algorithm that uses data from both
human and agent games within a diverse league of continually adapting strategies
and counter-strategies, each represented by deep neural networks5,6. We evaluated
our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three
StarCraft races and above 99.8% of officially ranked human players.
StarCraft is a real-time strategy game in which players balance high-
level economic decisions with individual control of hundreds of units.
This domain raises important game-theoretic challenges: it features a
vast space of cyclic, non-transitive strategies and counter-strate -
gies; discovering novel strategies is intractable with naive self-play
exploration methods; and those strategies may not be effective when
deployed in real-world play with humans. Furthermore, StarCraft
has a combinatorial action space, a planning horizon that extends
over thousands of real-time decisions, and imperfect information7.
Each game consists of tens of thousands of time-steps and thousands
of actions, selected in real-time throughout approximately ten minutes
of gameplay. At each step t, our agent AlphaStar receives an observation
o_t that includes a list of all observable units and their attributes. This
information is imperfect; the game includes only opponent units seen
by the player’s own units, and excludes some opponent unit attributes
outside the camera view. Each action a_t is highly structured: it selects what action type, out of
several hundred (for example, move or build worker); who to issue that
action to, for any subset of the agent’s units; where to target, among
locations on the map or units within the camera view; and when to
observe and act next (Fig. 1a). This representation of actions results
in approximately 10^26 possible choices at each step. Similar to human
players, a special action is available to move the camera view, so as to
gather more information.
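To make this factored action space concrete, the sketch below writes a_t as a plain data type; the field names, value ranges, and example are illustrative assumptions, not the actual AlphaStar interface. The roughly 10^26 choices per step arise from the product of these factors: hundreds of action types, every subset of the agent's own units, and the many candidate target locations or units.

```python
# Illustrative sketch of a factored StarCraft II action (not the real AlphaStar
# interface): action type, which of the agent's own units receive it, an optional
# target, and how long to wait before observing and acting again.
from dataclasses import dataclass
from typing import FrozenSet, Optional, Tuple, Union

@dataclass(frozen=True)
class Action:
    action_type: int                                 # one of several hundred types (e.g. move, build worker)
    selected_units: FrozenSet[int]                   # any subset of the agent's own unit tags
    target: Optional[Union[Tuple[int, int], int]]    # a map location or observed unit tag, if the type needs one
    delay: int                                       # game loops until the agent next observes and acts

# Example: order units 12 and 37 to move to map coordinates (43, 91), then act again after 8 loops.
a_t = Action(action_type=3, selected_units=frozenset({12, 37}), target=(43, 91), delay=8)
```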
Humans play StarCraft under physical constraints that limit their
reaction time and the rate of their actions. The game was designed with
those limitations in mind, and removing those constraints changes the
nature of the game. We therefore chose to impose constraints upon
AlphaStar: it suffers from delays due to network latency and compu -
tation time; and its actions per minute (APM) are limited, with peak
statistics substantially lower than those of humans (Figs. 2c, 3g for
performance analysis). AlphaStar's play with this interface and these
1DeepMind, London, UK. 2Team Liquid, Utrecht, Netherlands. 3These authors contributed equally: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik,
Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou,
Chris Apps, David Silver. *e-mail: vinyals@google.com; davidsilver@google.com |
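The constraints described above include a cap on actions per minute. As a hedged illustration of how such a cap could be enforced on the agent side (not AlphaStar's actual mechanism), a sliding-window limiter might look like this:

```python
# Sketch of an agent-side APM cap (illustration only, not AlphaStar's mechanism):
# a sliding one-minute window; once the budget is spent, the agent must no-op.
from collections import deque

class APMLimiter:
    def __init__(self, max_actions_per_minute: int):
        self.budget = max_actions_per_minute
        self.times = deque()                 # timestamps (seconds) of recent actions

    def allow(self, now: float) -> bool:
        while self.times and now - self.times[0] > 60.0:
            self.times.popleft()             # forget actions older than one minute
        if len(self.times) < self.budget:
            self.times.append(now)
            return True
        return False                         # over budget: issue a no-op this step
```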
2401.04056.pdf | A Minimaximalist Approach to
Reinforcement Learning from Human Feedback
Gokul Swamy1* Christoph Dann2 Rahul Kidambi2 Zhiwei Steven Wu1 Alekh Agarwal2
Abstract
We present Self-Play Preference Optimization
(SPO), an algorithm for reinforcement learning
from human feedback. Our approach is minimal-
ist in that it does not require training a reward
model nor unstable adversarial training and is
therefore rather simple to implement. Our ap-
proach is maximalist in that it provably handles
non-Markovian, intransitive, and stochastic pref-
erences while being robust to the compounding
errors that plague offline approaches to sequen-
tial prediction. To achieve the preceding qual-
ities, we build upon the concept of a Minimax
Winner (MW), a notion of preference aggrega-
tion from the social choice theory literature that
frames learning from preferences as a zero-sum
game between two policies. By leveraging the
symmetry of this game, we prove that rather than
using the traditional technique of dueling two poli-
cies to compute the MW, we can simply have a
single agent play against itself while maintain-
ing strong convergence guarantees. Practically,
this corresponds to sampling multiple trajectories
from a policy, asking a rater or preference model
to compare them, and then using the proportion
of wins as the reward for a particular trajectory.
We demonstrate that on a suite of continuous con-
trol tasks, we are able to learn significantly more
efficiently than reward-model based approaches
while maintaining robustness to the intransitive
and stochastic preferences that frequently occur
in practice when aggregating human judgments.
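A minimal sketch of the win-proportion reward just described, assuming a caller-supplied `prefer(a, b)` rater or preference model that returns True when it prefers its first argument; this illustrates the idea rather than reproducing the authors' implementation.

```python
# Illustration of SPO-style rewards from self-play (not the authors' code):
# sample several trajectories from one policy, compare every pair with a rater
# or preference model, and reward each trajectory with its proportion of wins.
def spo_rewards(trajectories, prefer):
    """prefer(a, b) -> True if trajectory `a` is preferred to trajectory `b`."""
    n = len(trajectories)
    assert n >= 2, "need at least two rollouts to compare"
    rewards = []
    for i, tau_i in enumerate(trajectories):
        wins = sum(prefer(tau_i, tau_j) for j, tau_j in enumerate(trajectories) if j != i)
        rewards.append(wins / (n - 1))       # fraction of the other rollouts it beats
    return rewards

# rewards = spo_rewards(rollouts, prefer=preference_model)
# Each scalar then feeds any standard policy-gradient / RL update for its rollout.
```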
1. Introduction
Reinforcement learning from human feedback (RLHF,
Christiano et al. (2017)) also known as preference-based
reinforcement learning (PbRL, Akrour et al. (2012); Wirth
et al. (2017); Sadigh et al. (2017); Ibarz et al. (2018); Lee
et al. (2021b;a); Sikchi et al. (2022)), is a technique for
policy optimization based on relative, rather than absolute,
feedback. Owing to the relative ease of providing compara-
tive feedback rather than absolute scores for agent behavior
for human raters (Miller, 1956), RLHF has been success-
fully applied across fields from robotics (Cakmak et al.,
2011; Tucker et al., 2020; Swamy et al., 2020; Bıyık et al.,
2020) to recommendation (De Gemmis et al., 2009; Ailon
& Mohri, 2010; Viappiani & Boutilier, 2010; Afsar et al.,
2022), to retrieval (Yue & Joachims, 2009). As of late,
RLHF has attracted renewed interest as a leading technique
for fine-tuning large language models (LLMs) (Ziegler et al.,
2020; Stiennon et al., 2020; Bai et al., 2022a; Ouyang et al.,
2022).
*Work (mostly) completed while a Student Researcher at Google Research. 1Carnegie Mellon University. 2Google Research. Correspondence to: Gokul Swamy <gswamy@cmu.edu>.
Figure 1: The standard pipeline (left) for preference-based RL / RLHF involves training a reward model based on a dataset of pairwise preferences and then optimizing it via RL. We introduce SPO (right), an iterative method that instead optimizes directly based on preference feedback provided by a rater or preference model, with each trajectory getting a reward based on the proportion of other on-policy trajectories it is preferred to. We prove and validate empirically that this approach is more robust to intransitive, non-Markovian, and noisy preferences than prior works.
The predominantly studied approach to RLHF is via Reward-
based RLHF, a two-stage procedure. First, given pairs of
preferred and dis-preferred behavior, one trains a reward
|
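For contrast with SPO, the reward-based first stage described above is commonly implemented with a Bradley-Terry-style pairwise objective; the sketch below assumes that choice and a generic `reward_model` callable, and is not this paper's method (which skips the reward model entirely).

```python
# Stage one of conventional reward-based RLHF (illustration, not SPO): fit a
# reward model so that preferred behavior scores higher than dis-preferred
# behavior under a Bradley-Terry-style pairwise loss.
import torch.nn.functional as F

def reward_model_loss(reward_model, preferred, dispreferred):
    r_plus = reward_model(preferred)        # shape: (batch,)
    r_minus = reward_model(dispreferred)    # shape: (batch,)
    # Maximize log P(preferred beats dis-preferred) = log sigmoid(r_plus - r_minus).
    return -F.logsigmoid(r_plus - r_minus).mean()

# The fitted reward model is then optimized against with standard RL (e.g. PPO),
# which is exactly the two-stage pipeline that SPO's single-stage approach avoids.
```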
2301.11325.pdf | MusicLM: Generating Music From Text
Andrea Agostinelli*1 Timo I. Denk*1
Zalán Borsos1 Jesse Engel1 Mauro Verzetti1 Antoine Caillon2 Qingqing Huang1 Aren Jansen1
Adam Roberts1 Marco Tagliasacchi1 Matt Sharifi1 Neil Zeghidour1 Christian Frank1
Abstract
We introduce MusicLM, a model for generating
high-fidelity music from text descriptions such as
“a calming violin melody backed by a distorted gui-
tar riff” . MusicLM casts the process of condi-
tional music generation as a hierarchical sequence-
to-sequence modeling task, and it generates music
at 24 kHz that remains consistent over several mi-
nutes. Our experiments show that MusicLM out-
performs previous systems both in audio quality
and adherence to the text descriptions. Moreover,
we demonstrate that MusicLM can be conditioned
on both text and a melody in that it can transform
whistled and hummed melodies according to the
style described in a text caption. To support fu-
ture research, we publicly release MusicCaps, a
dataset composed of 5.5k music-text pairs, with
rich text descriptions provided by human experts.
google-research.github.io/seanet/musiclm/examples
1. Introduction
Conditional neural audio generation covers a wide range of
applications, ranging from text-to-speech (Zen et al., 2013;
van den Oord et al., 2016) to lyrics-conditioned music ge-
neration (Dhariwal et al., 2020) and audio synthesis from
MIDI sequences (Hawthorne et al., 2022b). Such tasks are
facilitated by a certain level of temporal alignment between
the conditioning signal and the corresponding audio out-
put. In contrast, and inspired by progress in text-to-image
generation (Ramesh et al., 2021; 2022; Saharia et al., 2022;
Yu et al., 2022), recent work has explored generating audio
from sequence-wide, high-level captions (Yang et al., 2022;
Kreuk et al., 2022) such as “whistling with wind blowing” .
While generating audio from such coarse captions repre-
sents a breakthrough, these models remain limited to simple
acoustic scenes, consisting of few acoustic events over a
period of seconds. Hence, turning a single text caption into
a rich audio sequence with long-term structure and many
stems, such as a music clip, remains an open challenge.
*Equal contribution. 1Google Research. 2IRCAM - Sorbonne Université (work done while interning at Google). Correspondence to: Christian Frank <chfrank@google.com>.
AudioLM (Borsos et al., 2022) has recently been proposed
as a framework for audio generation. Casting audio synthe-
sis as a language modeling task in a discrete representation
space, and leveraging a hierarchy of coarse-to-fine audio
discrete units (or tokens ), AudioLM achieves both high-
fidelity and long-term coherence over dozens of seconds.
Moreover, by making no assumptions about the content
of the audio signal, AudioLM learns to generate realistic
audio from audio-only corpora, be it speech or piano music,
without any annotation. The ability to model diverse signals
suggests that such a system could generate richer outputs
if trained on the appropriate data.
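A schematic sketch of the kind of coarse-to-fine, multi-stage token generation described here, with text conditioning added as MusicLM does; the stage interfaces, names, and decode step are simplified assumptions rather than the actual AudioLM or MusicLM code.

```python
# Schematic coarse-to-fine generation (simplified; not the AudioLM/MusicLM code).
# Each stage is an autoregressive model over discrete audio tokens, conditioned
# on every coarser token stream and, for MusicLM, on a text-derived embedding.
def generate_audio(text_conditioning, stage_models, codec_decoder):
    """stage_models: autoregressive token models ordered from coarsest to finest."""
    token_streams = []                               # coarsest stream first
    for stage in stage_models:
        tokens = stage.sample(conditioning=(text_conditioning, *token_streams))
        token_streams.append(tokens)
    # A neural audio codec maps the acoustic token streams back to a waveform.
    return codec_decoder.decode(token_streams)
```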
Besides the inherent difficulty of synthesizing high-quality
and coherent audio, another impeding factor is the scarcity
of paired audio-text data. This is in stark contrast with the
image domain, where the availability of massive datasets
contributed significantly to the remarkable image generation
quality that has recently been achieved (Ramesh et al., 2021;
2022; Saharia et al., 2022; Yu et al., 2022). Moreover, creat-
ing text descriptions of general audio is considerably harder
than describing images. First, it is not straightforward to un-
ambiguously capture with just a few words the salient char-
acteristics of either acoustic scenes (e.g., the sounds heard
in a train station or in a forest) or music (e.g., the melody,
the rhythm, the timbre of vocals and the many instruments
used in accompaniment). Second, audio is structured along
a temporal dimension which makes sequence-wide captions
a much weaker level of annotation than an image caption.
In this work, we introduce MusicLM, a model for genera-
ting high-fidelity music from text descriptions. MusicLM
leverages AudioLM’s multi-stage autoregressive modeling
as the generative component, while extending it to incor-
porate text conditioning. To address the main challenge of
paired data scarcity, we rely on MuLan (Huang et al., 2022),
a joint music-text model that is trained to project music and
its corresponding text description to representations close to
each other in an embedding space. This shared embedding
space eliminates the need for captions at training time alto-