filename | text
---|---|
2305.13048.pdf | RWKV: Reinventing RNNs for the Transformer Era
Bo Peng1∗ Eric Alcaide2,3,4∗ Quentin Anthony2,5∗
Alon Albalak2,6 Samuel Arcadinho2,7 Huanqi Cao8 Xin Cheng9 Michael Chung10
Matteo Grella11 Kranthi Kiran GV12 Xuzheng He2 Haowen Hou13 Przemysław Kazienko14
Jan Kocoń14 Jiaming Kong15 Bartłomiej Koptyra14 Hayden Lau2 Krishna Sri Ipsit Mantri16
Ferdinand Mom17,18 Atsushi Saito2,19 Xiangru Tang20 Bolun Wang27 Johan S. Wind21 Stanisław Woźniak14
Ruichong Zhang8 Zhenyuan Zhang2 Qihang Zhao22,23 Peng Zhou27 Jian Zhu24 Rui-Jie Zhu25,26
1RWKV Foundation 2EleutherAI 3University of Barcelona 4Charm Therapeutics 5Ohio State University
6University of California, Santa Barbara 7Zendesk 8Tsinghua University 9Peking University
10Storyteller.io 11Crisis24 12New York University 13National University of Singapore
14Wroclaw University of Science and Technology 15Databaker Technology Co. Ltd 16Purdue University
17Criteo AI Lab 18Epita 19Nextremer Co. Ltd. 20Yale University 21University of Oslo
22University of Science and Technology of China 23Kuaishou Technology Co. Ltd
24University of British Columbia 25University of California, Santa Cruz
26University of Electronic Science and Technology of China 27RuoxinTech
Abstract
Transformers have revolutionized almost all
natural language processing (NLP) tasks but
suffer from memory and computational com-
plexity that scales quadratically with sequence
length. In contrast, recurrent neural networks
(RNNs) exhibit linear scaling in memory and
computational requirements but struggle to
match the same performance as Transform-
ers due to limitations in parallelization and
scalability. We propose a novel model ar-
chitecture, Receptance Weighted Key Value
(RWKV), that combines the efficient paral-
lelizable training of Transformers with the effi-
cient inference of RNNs. Our approach lever-
ages a linear attention mechanism and allows
us to formulate the model as either a Trans-
former or an RNN, which parallelizes compu-
tations during training and maintains constant
computational and memory complexity during
inference, leading to the first non-transformer
architecture to be scaled to tens of billions
of parameters. Our experiments reveal that
RWKV performs on par with similarly sized
Transformers, suggesting that future work can
leverage this architecture to create more effi-
cient models. This work presents a signifi-
cant step towards reconciling the trade-offs be-
tween computational efficiency and model per-
formance in sequence processing tasks.1
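The abstract does not reproduce RWKV's time-mixing equations, so the sketch below only illustrates the general idea it appeals to: a linear attention mechanism can be evaluated as a recurrence with a fixed-size state, giving constant memory and compute per token at inference. The decay factor and the exponential feature map here are illustrative assumptions, not the RWKV formulation.

```python
import numpy as np

def linear_attention_recurrent(Q, K, V, decay=0.95):
    """Evaluate a linear-attention-style layer as an RNN (illustrative only).

    Q, K, V have shape (T, d). The running `state` (d x d) and `norm` (d)
    summarize the entire past, so each step costs O(d^2) regardless of T,
    which is the constant-memory inference property described above.
    """
    T, d = K.shape
    state = np.zeros((d, d))   # accumulated outer products of keys and values
    norm = np.zeros(d)         # accumulated keys, used for normalization
    out = np.zeros((T, d))
    for t in range(T):
        k, v, q = np.exp(K[t]), V[t], Q[t]   # exp() keeps the weights positive
        state = decay * state + np.outer(k, v)
        norm = decay * norm + k
        out[t] = (q @ state) / (q @ norm + 1e-9)
    return out
```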
1 Introduction
Deep learning techniques have made significant
strides in artificial intelligence, playing a pivotal
∗Equal first authorship. Others listed alphabetically.
1Code at: https://github.com/BlinkDL/RWKV-LM
role in various scientific and industrial applica-
tions. These applications often involve complex
sequential data processing tasks that include nat-
ural language understanding, conversational AI,
time-series analysis, and even indirect modalities
that can be reframed as sequences, such as im-
ages and graphs (Brown et al., 2020; Ismail Fawaz
et al., 2019; Wu et al., 2020; Albalak et al., 2022).
Predominant among these techniques are RNNs,
convolutional neural networks (CNNs), and the
Transformer models (Vaswani et al., 2017).
Each of these has distinct drawbacks that restrict
their efficiency in certain scenarios. RNNs suf-
fer from the vanishing gradient problem, making
them difficult to train for long sequences. Addition-
ally, they cannot be parallelized in the time dimen-
sion during training, which restricts their scalability
(Hochreiter, 1998; Le and Zuidema, 2016). CNNs,
on the other hand, are only adept at capturing local
patterns, which limits their capacity to deal with
long-range dependencies, crucial to many sequence
processing tasks (Bai et al., 2018).
Transformer models emerged as a powerful alter-
native due to their ability to handle both local and
long-range dependencies and their capability for
parallelized training (Tay et al., 2022). Recent mod-
els such as GPT-3 (Brown et al., 2020), ChatGPT
(OpenAI, 2022; Koco ´n et al., 2023), GPT-4 (Ope-
nAI, 2023), LLaMA (Touvron et al., 2023), and
Chinchilla (Hoffmann et al., 2022) exemplify the
capability of this architecture, pushing the frontiers
of what’s possible in NLP. Despite these signifi-
cant advancements, the self-attention mechanism
inherent to Transformers poses unique challenges,
arXiv:2305.13048v1 [cs.CL] 22 May 2023 |
2023.12.07.570727v1.full.pdf | ProteinGym: Large-Scale Benchmarks for Protein
Design and Fitness Prediction
Pascal Notin†∗ (Computer Science, University of Oxford), Aaron W. Kollasch† (Systems Biology, Harvard Medical School), Daniel Ritter† (Systems Biology, Harvard Medical School),
Lood van Niekerk† (Systems Biology, Harvard Medical School), Steffanie Paul (Systems Biology, Harvard Medical School), Hansen Spinner (Systems Biology, Harvard Medical School),
Nathan Rollins (Seismic Therapeutic), Ada Shaw (Applied Mathematics, Harvard University), Ruben Weitzman (Computer Science, University of Oxford),
Jonathan Frazer (Centre for Genomic Regulation, Universitat Pompeu Fabra), Mafalda Dias (Centre for Genomic Regulation, Universitat Pompeu Fabra), Dinko Franceschi (Systems Biology, Harvard Medical School),
Rose Orenbuch (Systems Biology, Harvard Medical School), Yarin Gal (Computer Science, University of Oxford), Debora S. Marks∗ (Harvard Medical School, Broad Institute)
Abstract
Predicting the effects of mutations in proteins is critical to many applications, from
understanding genetic disease to designing novel proteins that can address our
most pressing challenges in climate, agriculture and healthcare. Despite a surge in
machine learning-based protein models to tackle these questions, an assessment of
their respective benefits is challenging due to the use of distinct, often contrived,
experimental datasets, and the variable performance of models across different
protein families. Addressing these challenges requires scale. To that end we
introduce ProteinGym, a large-scale and holistic set of benchmarks specifically
designed for protein fitness prediction and design. It encompasses both a broad
collection of over 250 standardized deep mutational scanning assays, spanning
millions of mutated sequences, as well as curated clinical datasets providing high-
quality expert annotations about mutation effects. We devise a robust evaluation
framework that combines metrics for both fitness prediction and design, factors
in known limitations of the underlying experimental methods, and covers both
zero-shot and supervised settings. We report the performance of a diverse set
of over 70 high-performing models from various subfields (e.g., alignment-based,
inverse folding) into a unified benchmark suite. We open source the corresponding
codebase, datasets, MSAs, structures, model predictions and develop a user-friendly
website that facilitates data access and analysis.
∗Correspondence: pascal.notin@cs.ox.ac.uk, kollasch@g.harvard.edu, danieldritter1@gmail.com,
loodvn@gmail.com, debbie@hms.harvard.edu ; †Equal contribution
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks. |
1511.06349.pdf | Generating Sentences from a Continuous Space
Samuel R. Bowman∗
NLP Group and Dept. of Linguistics
Stanford University
sbowman@stanford.edu
Luke Vilnis∗
CICS
University of Massachusetts Amherst
luke@cs.umass.edu
Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz & Samy Bengio
Google Brain
{vinyals, adai, rafalj, bengio }@google.com
Abstract
The standard recurrent neural network
language model (RNNLM) generates sen-
tences one word at a time and does not
work from an explicit global sentence rep-
resentation. In this work, we introduce
and study an rnn-based variational au-
toencoder generative model that incorpo-
rates distributed latent representations of
entire sentences. This factorization al-
lows it to explicitly model holistic prop-
erties of sentences such as style, topic,
and high-level syntactic features. Samples
from the prior over these sentence repre-
sentations remarkably produce diverse and
well-formed sentences through simple de-
terministic decoding. By examining paths
through this latent space, we are able to
generate coherent novel sentences that in-
terpolate between known sentences. We
present techniques for solving the difficult
learning problem presented by this model,
demonstrate its effectiveness in imputing
missing words, explore many interesting
properties of the model’s latent sentence
space, and present negative results on the
use of the model in language modeling.
1 Introduction
Recurrent neural network language models
(RNNLMs, Mikolov et al., 2011) represent the state
of the art in unsupervised generative modeling
for natural language sentences. In supervised
settings, rnnlm decoders conditioned on task-
specific features are the state of the art in tasks
like machine translation (Sutskever et al., 2014;
Bahdanau et al., 2015) and image captioning
(Vinyals et al., 2015; Mao et al., 2015; Donahue
et al., 2015). The rnnlm generates sentences
word-by-word based on an evolving distributed
state representation, which makes it a proba-
bilistic model with no significant independence
∗First two authors contributed equally. Work was
done when all authors were at Google, Inc.
i went to the store to buy some groceries .
i store to buy some groceries .
i were to buy any groceries .
horses are to buy any groceries .
horses are to buy any animal .
horses the favorite any animal .
horses the favorite favorite animal .
horses are my favorite animal .
Table 1: Sentences produced by greedily decoding
from points between two sentence encodings with
a conventional autoencoder. The intermediate sen-
tences are not plausible English.
assumptions, and makes it capable of modeling
complex distributions over sequences, including
those with long-term dependencies. However, by
breaking the model structure down into a series of
next-step predictions, the rnnlm does not expose
an interpretable representation of global features
like topic or of high-level syntactic properties.
We propose an extension of the rnnlm that is
designed to explicitly capture such global features
in a continuous latent variable. Naively, maxi-
mum likelihood learning in such a model presents
an intractable inference problem. Drawing inspi-
ration from recent successes in modeling images
(Gregor et al., 2015), handwriting, and natural
speech (Chung et al., 2015), our model circum-
vents these difficulties using the architecture of a
variational autoencoder and takes advantage of re-
cent advances in variational inference (Kingma and
Welling, 2015; Rezende et al., 2014) that introduce
a practical training technique for powerful neural
network generative models with latent variables.
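As a concrete picture of the "paths through latent space" mentioned above (and illustrated by Table 1), the sketch below linearly interpolates between two sentence encodings and greedily decodes each intermediate point; `encode` and `decode` are hypothetical stand-ins for the paper's RNN encoder and decoder, not its actual code.

```python
import numpy as np

def interpolate_and_decode(encode, decode, sentence_a, sentence_b, steps=8):
    """Greedy-decode sentences along a straight line between two latent codes.

    `encode` maps a sentence to a latent vector and `decode` greedily maps a
    latent vector back to a sentence; both are assumed interfaces.
    """
    z_a, z_b = encode(sentence_a), encode(sentence_b)
    sentences = []
    for alpha in np.linspace(0.0, 1.0, steps):
        z = (1.0 - alpha) * z_a + alpha * z_b   # linear path through latent space
        sentences.append(decode(z))
    return sentences
```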
Our contributions are as follows: We propose a
variational autoencoder architecture for text and
discuss some of the obstacles to training it as well
as our proposed solutions. We find that on a stan-
dard language modeling evaluation where a global
variable is not explicitly needed, this model yields
similar performance to existing RNNLMs. We also
evaluate our model using a larger corpus on the
task of imputing missing words. For this task,
we introduce a novel evaluation strategy using an
arXiv:1511.06349v4 [cs.LG] 12 May 2016 |
2402.16819.pdf | Nemotron-4 15B Technical Report
Jupinder Parmar* Shrimai Prabhumoye∗ Joseph Jennings∗ Mostofa Patwary∗
Sandeep Subramanian† Dan Su Chen Zhu Deepak Narayanan Aastha Jhunjhunwala Ayush
Dattagupta Vibhu Jawa Jiwei Liu Ameya Mahabaleshwarkar Osvald Nitski Annika
Brundyn James Maki Miguel Martinez Jiaxuan You John Kamalu Patrick LeGresley
Denys Fridman Jared Casper Ashwath Aithal Oleksii Kuchaiev Mohammad Shoeybi
Jonathan Cohen Bryan Catanzaro
NVIDIA
Abstract
We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual language model trained
on 8 trillion text tokens. Nemotron-4 15B demonstrates strong performance when assessed on English,
multilingual, and coding tasks: it outperforms all existing similarly-sized open models on 4 out of 7
downstream evaluation areas and achieves competitive performance to the leading open models in the
remaining ones. Specifically, Nemotron-4 15B exhibits the best multilingual capabilities of all similarly-
sized models, even outperforming models over four times larger and those explicitly specialized for
multilingual tasks.
1 Introduction
Recently published efforts (Hoffmann et al., 2022; Touvron et al., 2023a,b; Yang et al., 2023; Jiang et al.,
2023) in language model pre-training have been inspired by Chinchilla scaling laws (Hoffmann et al., 2022),
which argue for scaling data along with model size given a fixed compute budget, compared to past work
that only scaled the size of the model (Kaplan et al., 2020; Brown et al., 2020; Smith et al., 2022; Rae et al.,
2022; Scao et al., 2023). For example, (Hoffmann et al., 2022) shows that given two roughly IsoFLOP
GPT models with a similar data distribution, a 65-billion-parameter model on 1.4 trillion tokens and a 280-
billion-parameter model on 300 billion tokens, the 65B model has better accuracy on downstream tasks.
This trade-off of allocating compute towards training on more data as opposed to increasing model size is
particularly appealing from an inference perspective, reducing latency and the amount of compute needed
to serve models. As a consequence, a major focus of language modeling training efforts has shifted to col-
lecting high-quality multi-trillion token datasets from public sources such as Common Crawl. We continue
this trend by introducing Nemotron-4 15B which was trained on 8 trillion tokens of English, multilingual,
*Equal contribution, corresponding authors: {jupinderp,sprabhumoye,jjennings,mpatwary}@nvidia.com.
†Work done while at NVIDIA.
arXiv:2402.16819v1 [cs.CL] 26 Feb 2024 |
2203.05482.pdf | Model soups: averaging weights of multiple fine-tuned models
improves accuracy without increasing inference time
Mitchell Wortsman1 Gabriel Ilharco1 Samir Yitzhak Gadre2 Rebecca Roelofs3 Raphael Gontijo-Lopes3
Ari S. Morcos4 Hongseok Namkoong2 Ali Farhadi1 Yair Carmon*5 Simon Kornblith*3 Ludwig Schmidt*1
Abstract
The conventional recipe for maximizing model
accuracy is to (1) train multiple models with var-
ious hyperparameters and (2) pick the individ-
ual model which performs best on a held-out
validation set, discarding the remainder. In this
paper, we revisit the second step of this proce-
dure in the context of fine-tuning large pre-trained
models, where fine-tuned models often appear
to lie in a single low error basin. We show that
averaging the weights of multiple models fine-
tuned with different hyperparameter configura-
tions often improves accuracy and robustness. Un-
like a conventional ensemble, we may average
many models without incurring any additional
inference or memory costs—we call the results
“model soups.” When fine-tuning large pre-trained
models such as CLIP, ALIGN, and a ViT-G pre-
trained on JFT, our soup recipe provides signifi-
cant improvements over the best model in a hy-
perparameter sweep on ImageNet. The result-
ing ViT-G model, which attains 90.94% top-1
accuracy on ImageNet, achieved a new state of
the art. Furthermore, we show that the model
soup approach extends to multiple image clas-
sification and natural language processing tasks,
improves out-of-distribution performance, and im-
proves zero-shot performance on new downstream
tasks. Finally, we analytically relate the perfor-
mance similarity of weight-averaging and logit-
ensembling to flatness of the loss and confidence
of the predictions, and validate this relation em-
pirically. Code is available at
https://github.com/mlfoundations/model-soups.
*Equal contribution 1University of Washington 2Columbia University 3Google Research, Brain Team 4Meta AI Research 5Tel Aviv University. Correspondence to: <mitchnw@uw.edu>.
Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
[Figure 1 scatter plot: x-axis ImageNet Accuracy (top-1, %), 75-81; y-axis Avg. accuracy on 5 distribution shifts, 35-55; series: Greedy Soup, Uniform Soup, Initialization, Various hyperparameters]
Figure 1: Model soups improve accuracy over the best individual
model when performing a large, random hyperparameter search
for fine-tuning a CLIP ViT-B/32 model on ImageNet. The uniform
soup (blue circle) averages all fine-tuned models (green diamonds)
in a random hyperparameter search over learning rate, weight-
decay, iterations, data augmentation, mixup, and label smoothing.
The greedy soup adds models sequentially to the model soup,
keeping a model in the soup if accuracy on the held-out validation
set does not decrease.
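The greedy-soup procedure described in this caption can be written down directly. The sketch below assumes `models` is a list of fine-tuned weight dictionaries sorted by individual held-out accuracy (best first) and `val_accuracy` scores an averaged model; both interfaces are assumptions, not the authors' released code.

```python
def greedy_soup(models, val_accuracy):
    """Add fine-tuned models one at a time, keeping each only if held-out
    validation accuracy of the averaged weights does not decrease."""
    def average(state_dicts):
        # uniform average of the weights, key by key
        return {k: sum(sd[k] for sd in state_dicts) / len(state_dicts)
                for k in state_dicts[0]}

    soup = [models[0]]
    best_acc = val_accuracy(average(soup))
    for candidate in models[1:]:
        acc = val_accuracy(average(soup + [candidate]))
        if acc >= best_acc:          # keep the candidate only if it does not hurt
            soup.append(candidate)
            best_acc = acc
    return average(soup)
```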
Method                           ImageNet acc. (top-1, %)   Distribution shifts
ViT-G (Zhai et al., 2021)        90.45                      –
CoAtNet-7 (Dai et al., 2021)     90.88                      –
Our models/evaluations based on ViT-G:
ViT-G (reevaluated)              90.47                      82.06
Best model in hyperparam search  90.78                      84.68
Greedy soup                      90.94                      85.02
Table 1: Model soups improve accuracy over the best individual
model when fine-tuning a JFT-3B pre-trained ViT-G/14 model on
ImageNet. Instead of selecting the best model from a hyperparam-
eter sweep during fine-tuning, model soups average the weights
of multiple fine-tuned models. To evaluate performance under
distribution shift we consider average accuracy on ImageNet-V2,
ImageNet-R, ImageNet-Sketch, ObjectNet, and ImageNet-A. Ad-
ditional details are provided by Table 4 and Section 3.3.2, while
analogous results for BASIC (Pham et al., 2021) are in Appendix C.
arXiv:2203.05482v3 [cs.LG] 1 Jul 2022 |
noise-contrastive-estimation.pdf | Journal of Machine Learning Research 13 (2012) 307-361. Submitted 12/10; Revised 11/11; Published 2/12
Noise-Contrastive Estimation of Unnormalized Statistical Models,
with Applications to Natural Image Statistics
Michael U. Gutmann MICHAEL.GUTMANN@HELSINKI.FI
Aapo Hyvärinen AAPO.HYVARINEN@HELSINKI.FI
Department of Computer Science
Department of Mathematics and Statistics
Helsinki Institute for Information Technology HIIT
University of Helsinki, Finland
Editor: Yoshua Bengio
Abstract
We consider the task of estimating, from observed data, a probabilistic model that is parameterized
by a finite number of parameters. In particular, we are considering the situation where the model
probability density function is unnormalized. That is, the model is only specified up to the partition
function. The partition function normalizes a model so that it integrates to one for any choice of
the parameters. However, it is often impossible to obtain it in closed form. Gibbs distributions,
Markov and multi-layer networks are examples of models where analytical normalization is often
impossible. Maximum likelihood estimation can then not be used without resorting to numerical
approximations which are often computationally expensive. We propose here a new objective func-
tion for the estimation of both normalized and unnormalized models. The basic idea is to perform
nonlinear logistic regression to discriminate between the observed data and some artificially gener-
ated noise. With this approach, the normalizing partition function can be estimated like any other
parameter. We prove that the new estimation method leads to a consistent (convergent) estimator
of the parameters. For large noise sample sizes, the new estimator is furthermore shown to be-
have like the maximum likelihood estimator. In the estimation of unnormalized models, there is a
trade-off between statistical and computational performance. We show that the new method strikes
a competitive trade-off in comparison to other estimation methods for unnormalized models. As an
application to real data, we estimate novel two-layer models of natural image statistics with spline
nonlinearities.
Keywords: unnormalized models, partition function, computation, estimation, natural image
statistics
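A minimal sketch of the basic idea stated in the abstract: logistic regression that discriminates observed data from noise samples, with the log normalizing constant treated as an ordinary parameter. The equal data/noise weighting and the vectorized `log_model`/`log_noise_pdf` interfaces are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def nce_loss(theta, c, data, noise, log_model, log_noise_pdf):
    """Logistic regression between data and noise samples.

    log_model(x, theta) returns the unnormalized log-density for a batch x,
    log_noise_pdf(x) the log-density of the noise distribution (both assumed
    vectorized); c is the estimated log normalizing constant, treated as
    just another parameter.
    """
    def log_ratio(x):
        return log_model(x, theta) + c - log_noise_pdf(x)

    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    loss_data = -np.mean(np.log(sigmoid(log_ratio(data)) + 1e-12))    # label 1
    loss_noise = -np.mean(np.log(1.0 - sigmoid(log_ratio(noise)) + 1e-12))  # label 0
    return loss_data + loss_noise
```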
1. Introduction
This paper is about parametric density estimation, where the general setup is as follows. A sample
$X = (\mathbf{x}_1, \ldots, \mathbf{x}_{T_d})$ of a random vector $\mathbf{x} \in \mathbb{R}^n$ is observed which follows an unknown probability
density function (pdf) $p_d$. The data-pdf $p_d$ is modeled by a parameterized family of functions
$\{p_m(\cdot;\theta)\}_\theta$ where $\theta$ is a vector of parameters. It is commonly assumed that $p_d$ belongs to this
family. In other words, $p_d(\cdot) = p_m(\cdot;\theta^\star)$ for some parameter $\theta^\star$. The parametric density estimation
problem is then about finding $\theta^\star$ from the observed sample $X$. Any estimate $\hat\theta$ must yield a properly
©2012 Michael U. Gutmann and Aapo Hyvärinen. |
1611.03530.pdf | UNDERSTANDING DEEP LEARNING REQUIRES RE-THINKING GENERALIZATION
Chiyuan Zhang∗
Massachusetts Institute of Technology
chiyuan@mit.edu
Samy Bengio
Google Brain
bengio@google.com
Moritz Hardt
Google Brain
mrtz@google.com
Benjamin Recht†
University of California, Berkeley
brecht@berkeley.edu
Oriol Vinyals
Google DeepMind
vinyals@google.com
ABSTRACT
Despite their massive size, successful deep artificial neural networks can exhibit a
remarkably small difference between training and test performance. Conventional
wisdom attributes small generalization error either to properties of the model fam-
ily, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional ap-
proaches fail to explain why large neural networks generalize well in practice.
Specifically, our experiments establish that state-of-the-art convolutional networks
for image classification trained with stochastic gradient methods easily fit a ran-
dom labeling of the training data. This phenomenon is qualitatively unaffected
by explicit regularization, and occurs even if we replace the true images by com-
pletely unstructured random noise. We corroborate these experimental findings
with a theoretical construction showing that simple depth two neural networks al-
ready have perfect finite sample expressivity as soon as the number of parameters
exceeds the number of data points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models.
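The central randomization test is easy to reproduce in outline: replace the training labels with uniformly random classes and check whether the network still drives training error to zero. The helper below is a generic sketch under that description, not the authors' experimental code (which targets CIFAR-10/ImageNet with standard convolutional networks).

```python
import numpy as np

def randomize_labels(labels, num_classes, corruption_prob=1.0, seed=0):
    """Replace a fraction of labels with uniformly random classes.

    corruption_prob=1.0 corresponds to fully random labels; intermediate
    values interpolate between true and random labels.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    mask = rng.random(len(labels)) < corruption_prob
    labels[mask] = rng.integers(0, num_classes, size=mask.sum())
    return labels
```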
1 INTRODUCTION
Deep artificial neural networks often have far more trainable model parameters than the number of
samples they are trained on. Nonetheless, some of these models exhibit remarkably small gener-
alization error , i.e., difference between “training error” and “test error”. At the same time, it is
certainly easy to come up with natural model architectures that generalize poorly. What is it then
that distinguishes neural networks that generalize well from those that don’t? A satisfying answer
to this question would not only help to make neural networks more interpretable, but it might also
lead to more principled and reliable model architecture design.
To answer such a question, statistical learning theory has proposed a number of different complexity
measures that are capable of controlling generalization error. These include VC dimension (Vapnik,
1998), Rademacher complexity (Bartlett & Mendelson, 2003), and uniform stability (Mukherjee
et al., 2002; Bousquet & Elisseeff, 2002; Poggio et al., 2004). Moreover, when the number of
parameters is large, theory suggests that some form of regularization is needed to ensure small
generalization error. Regularization may also be implicit as is the case with early stopping.
1.1 OUR CONTRIBUTIONS
In this work, we problematize the traditional view of generalization by showing that it is incapable
of distinguishing between different neural networks that have radically different generalization per-
formance.
∗Work performed while interning at Google Brain.
†Work performed at Google Brain.
arXiv:1611.03530v2 [cs.LG] 26 Feb 2017 |
2310.03214.pdf | Preprint
FRESHLLMS:
REFRESHING LARGE LANGUAGE MODELS
WITH SEARCH ENGINE AUGMENTATION
Tu Vu1 Mohit Iyyer2 Xuezhi Wang1 Noah Constant1 Jerry Wei1
Jason Wei3∗ Chris Tar1 Yun-Hsuan Sung1 Denny Zhou1 Quoc Le1 Thang Luong1
Google1 University of Massachusetts Amherst2 OpenAI3
freshllms@google.com
ABSTRACT
Most large language models (LLMs) are trained once and never updated; thus,
they lack the ability to dynamically adapt to our ever-changing world. In this
work, we perform a detailed study of the factuality of LLM-generated text in the
context of answering questions that test current world knowledge. Specifically, we
introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range
of question and answer types, including questions that require fast-changing world
knowledge as well as questions with false premises that need to be debunked. We
benchmark a diverse array of both closed and open-source LLMs under a two-mode
evaluation procedure that allows us to measure both correctness and hallucination.
Through human evaluations involving more than 50K judgments, we shed light on
limitations of these models and demonstrate significant room for improvement: for
instance, all models (regardless of model size) struggle on questions that involve
fast-changing knowledge and false premises. Motivated by these results, we present
FreshPrompt, a simple few-shot prompting method that substantially boosts the
performance of an LLM on FreshQA by incorporating relevant and up-to-date
information retrieved from a search engine into the prompt. Our experiments
show that FreshPrompt outperforms both competing search engine-augmented
prompting methods such as Self-Ask (Press et al., 2022) as well as commercial
systems such as Perplexity.AI.1 Further analysis of FreshPrompt reveals that
both the number of retrieved evidences and their order play a key role in influencing
the correctness of LLM-generated answers. Additionally, instructing the LLM
to generate concise and direct answers helps reduce hallucination compared to
encouraging more verbose answers. To facilitate future work, we release FreshQA
at github.com/freshllms/freshqa and commit to updating it at regular intervals.
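A rough sketch of the kind of search-augmented prompting the abstract describes: retrieved evidence is placed ahead of the question and the model is asked for a concise, direct answer. The template, the field names (`source`, `date`, `snippet`), and the evidence ordering below are illustrative assumptions, not the paper's exact FreshPrompt format.

```python
def build_fresh_prompt(question, evidences, num_evidences=5):
    """Assemble a search-augmented prompt (illustrative template only).

    `evidences` is a list of dicts with 'source', 'date', and 'snippet' keys,
    an assumed representation of search engine results.
    """
    selected = evidences[:num_evidences]   # the paper reports evidence count and order matter
    lines = []
    for i, ev in enumerate(selected, 1):
        lines.append(f"[Evidence {i}] {ev['source']} ({ev['date']}): {ev['snippet']}")
    lines.append(f"Question: {question}")
    lines.append("Answer concisely and directly, using the evidence above:")
    return "\n".join(lines)
```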
1 INTRODUCTION
Recent large language models (LLMs) such as Bard and ChatGPT/GPT-42 are designed to be
versatile open-domain chatbots that can engage in multi-turn conversations on diverse subjects.
Despite their impressive capabilities, these LLMs often “hallucinate” plausible but factually incorrect
information (Maynez et al., 2020; Liu et al., 2023b), which reduces the trustworthiness of their
responses, especially in settings where accurate and up-to-date information is critical. This behavior
can be partially attributed to the presence of outdated knowledge encoded in their parameters. While
additional training using human feedback (Ouyang et al., 2022) or knowledge-enhanced tasks can
mitigate this issue, it is not easily scalable for real-time knowledge updates (e.g., stock price of a
company). In-context learning (Brown et al., 2020) is an appealing alternative in which real-time
knowledge can be injected into an LLM’s prompt for conditioning generation. While recent work
has begun to explore augmenting LLMs with web search results (Lazaridou et al., 2022; Press et al.,
2022), it is unclear how to take full advantage of search engine outputs to increase LLM factuality.
∗Work done while at Google.
1https://www.perplexity.ai
2https://bard.google.com, https://chat.openai.com
arXiv:2310.03214v1 [cs.CL] 5 Oct 2023 |
2111.02080v6.pdf | An Explanation of In-context Learning as Implicit
Bayesian Inference
Sang Michael Xie
Stanford University
xie@cs.stanford.edu
Aditi Raghunathan
Stanford University
aditir@stanford.edu
Percy Liang
Stanford University
pliang@cs.stanford.edu
Tengyu Ma
Stanford University
tengyuma@cs.stanford.edu
Abstract
Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where
the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output
examples. The LM learns from these examples without being explicitly pretrained to learn . Thus, it is unclear what
enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining
documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate
coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers
a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution
mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of
HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate
a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning1.
Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved
in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order,
and instances where zero-shot is better than few-shot in-context learning.
1 Introduction
Large language models (LMs) such as GPT-3 (Brown et al., 2020, Lieber et al., 2021, Radford et al.,
2019, Wang and Komatsuzaki, 2021) are pretrained on massive text corpora to predict the next word
given previous words. They demonstrate the surprising ability to do in-context learning , where an
LM “learns” to do a task simply by conditioning on a prompt containing input-output pairs, achiev-
ing SOTA results on LAMBADA (Paperno et al., 2016) and TriviaQA (Joshi et al., 2017) tasks (18%
and 3% over previous SOTA (Brown et al., 2020)). For example, consider the task of predicting
nationalities from names. A prompt (Figure 1) is constructed by concatenating independent “train-
ing” examples (e.g., “Albert Einstein was German”) followed by a “test example” (“Marie Curie
was”). Conditioning on this prompt, GPT-3 places the largest probability on the correct output
p(“Polish”|“Albert Einstein was German \n Mahatma Gandhi was Indian \n Marie Curie was” )
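The name-to-nationality example above amounts to concatenating input-output pairs and scoring candidate next tokens; a minimal sketch follows. The scoring call is left as a commented-out placeholder, since no particular LM API is specified in this excerpt.

```python
def in_context_prompt(train_pairs, test_input, delimiter="\n"):
    """Build an in-context learning prompt by concatenating input-output
    examples followed by a test input, as in the example above."""
    lines = [f"{x} {y}" for x, y in train_pairs]
    lines.append(test_input)
    return delimiter.join(lines)

prompt = in_context_prompt(
    [("Albert Einstein was", "German"), ("Mahatma Gandhi was", "Indian")],
    "Marie Curie was",
)
# Hypothetical scoring call: `next_token_logprobs` stands in for any LM API
# that returns log-probabilities over candidate continuations.
# scores = next_token_logprobs(prompt, candidates=["Polish", "French", "German"])
```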
1The code, data, and experiments are located on GitHub and CodaLab.
arXiv:2111.02080v6 [cs.CL] 21 Jul 2022 |
2110.04374.pdf | A Few More Examples May Be Worth Billions of Parameters
Yuval Kirstain♠ Patrick Lewis†‡ Sebastian Riedel†‡ Omer Levy♠‡
♠Tel-Aviv University
†University College London
‡Facebook AI Research
{yuval.kirstain,levyomer}@cs.tau.ac.il, {patrick.lewis,s.riedel}@cs.ucl.ac.uk
Abstract
We investigate the dynamics of increasing the
number of model parameters versus the num-
ber of labeled examples across a wide variety
of tasks. Our exploration reveals that while
scaling parameters consistently yields perfor-
mance improvements, the contribution of addi-
tional examples highly depends on the task’s
format. Specifically, in open question answer-
ing tasks, enlarging the training set does not
improve performance. In contrast, classifica-
tion, extractive question answering, and mul-
tiple choice tasks benefit so much from addi-
tional examples that collecting a few hundred
examples is often “worth” billions of parame-
ters. We hypothesize that unlike open question
answering, which involves recalling specific
information, solving strategies for tasks with
a more restricted output space transfer across
examples, and can therefore be learned with
small amounts of labeled data.1
1 Introduction
Recent work on few-shot learning for natural lan-
guage tasks explores the dynamics of scaling up
either the number of model parameters (Brown
et al., 2020) or labeled examples (Le Scao and
Rush, 2021), while controlling for the other vari-
able by setting it to a constant. For example, Brown
et al. (2020) focus on in-context learning from
roughly 32 to 64 examples, a practice that was
adopted by fine-tuning approaches as well (Schick
and Schütze, 2021b; Gao et al., 2021b; Tam et al.,
2021); however, there are many practical few-shot
scenarios where hundreds of examples can be col-
lected at a relatively low effort.2 Other work experi-
ments with single-size models (Schick and Schütze,
1Our code is publicly available:
https://github.com/yuvalkirstain/lm-evaluation-harness.
2In SQuAD (Rajpurkar et al., 2016), for example, the
average annotation pace is around one minute per question,
producing 480 examples in a single 8-hour workday.
[Figure 1 heatmaps: model size (S, B, L, XL) vs. number of labeled examples (32, 128, 512, 2048) for TriviaQA (Open) and SQuAD 2 (Extractive)]
Figure 1: Open QA tasks (e.g. TriviaQA) benefit from
additional parameters exclusively, while extractive QA
tasks (e.g. SQuAD 2) benefit from both larger models
and more labeled data.
2020; Ram et al., 2021; Le Scao and Rush, 2021;
Gao et al., 2021b), even though larger (or smaller)
models may exhibit different behavior. Further-
more, much of the literature focuses on classifica-
tion tasks (Schick and Schütze, 2021a; Gao et al.,
2021b; Le Scao and Rush, 2021), leaving it unclear
whether their conclusions generalize to tasks with
less restricted output spaces.
In this paper, we conduct a systematic explo-
ration of few-shot learning for language tasks,
where we investigate the dynamics of increasing
the number of model parameters (using different
sizes of the self-supervised T5 (Raffel et al., 2020))
versus the number of target-task labeled exam-
ples (from 32 to 2048) across a variety of tasks,
including not only classification, but also extrac-
tive, multiple-choice, and open question answer-
ing. Overall, we evaluate 192 scenarios by training
7,680 models to control for hyperparameters and
random seeds.
Our experiments show that, surprisingly, the con-
tribution of additional parameters versus additional
labeled examples highly depends on the format
of the task. For open QA tasks, such as the open-
domain version of Natural Questions (Kwiatkowski
et al., 2019; Lee et al., 2019), which require the
model to recall specific information seen during
arXiv:2110.04374v1 [cs.CL] 8 Oct 2021 |
10.7554.eLife.50524.001.pdf | *For correspondence: ronlevy@temple.edu
Competing interests: The authors declare that no competing interests exist. Funding: See page 20
Received: 25 July 2019; Accepted: 09 September 2019; Published: 08 October 2019
Reviewing editor: Patricia J Wittkopp, University of Michigan, United States
Copyright Biswas et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Epistasis and entrenchment of drug
resistance in HIV-1 subtype B
Avik Biswas1,2, Allan Haldane1,2, Eddy Arnold3,4, Ronald M Levy1,2,5*
1Center for Biophysics and Computational Biology, Temple University, Philadelphia,
United States;2Department of Physics, Temple University, Philadelphia, United
States;3Center for Advanced Biotechnology and Medicine, Rutgers University,
Piscataway, United States;4Department of Chemistry and Chemical Biology,
Rutgers University, Piscataway, United States;5Department of Chemistry, Temple
University, Philadelphia, United States
Abstract The development of drug resistance in HIV is the result of primary mutations whose
effects on viral fitness depend on the entire genetic background, a phenomenon called ‘epistasis’.
Based on protein sequences derived from drug-experienced patients in the Stanford HIV database,
we use a co-evolutionary (Potts) Hamiltonian model to provide direct confirmation of epistasis
involving many simultaneous mutations. Building on earlier work, we show that primary mutations
leading to drug resistance can become highly favored (or entrenched) by the complex mutation
patterns arising in response to drug therapy despite being disfavored in the wild-type background,
and provide the first confirmation of entrenchment for all three drug-target proteins: protease,
reverse transcriptase, and integrase; a comparative analysis reveals that NNRTI-induced mutations
behave differently from the others. We further show that the likelihood of resistance mutations can
vary widely in patient populations, and from the population average compared to specific
molecular clones.
DOI: https://doi.org/10.7554/eLife.50524.001
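The abstract names a co-evolutionary (Potts) Hamiltonian model but does not reproduce it. The sketch below writes down the textbook Potts energy over a sequence, with fields h and pairwise couplings J, purely as an illustration of the model class; the array shapes and sign convention are assumptions, not the authors' fitted model.

```python
import numpy as np

def potts_energy(seq, h, J):
    """Textbook Potts energy E(S) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j).

    seq: integer-encoded sequence of length L; h: fields of shape (L, q);
    J: couplings of shape (L, L, q, q). Lower energy corresponds to higher
    model probability P(S) proportional to exp(-E(S)).
    """
    L = len(seq)
    energy = -sum(h[i, seq[i]] for i in range(L))
    for i in range(L):
        for j in range(i + 1, L):
            energy -= J[i, j, seq[i], seq[j]]
    return float(energy)
```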
Introduction
HIV mutates rapidly as it jumps from host to host, acquiring resistance to each host’s distinct
immune response and applied drug regimen. Drug-resistance mutations (DRMs) arise when the virus
evolves under selective pressure due to antiretroviral therapy (ART). Primary DRMs often incur a fit-
ness penalty which is then compensated for by accompanying associated mutations ( Heeney et al.,
2006 ;Shafer and Schapiro, 2008 ). With the use of current robust inhibitors in drug therapy, the
drug-resistance mutation patterns in HIV have become increasingly more complex ( Richman et al.,
2004a ;Iyidogan and Anderson, 2014 ) often leading to ART failure in patients. Resistance is esti-
mated to develop in up to 50% of patients undergoing monotherapy ( Richman et al., 2004b ) and
up to 30% of patients receiving current combination antiretroviral therapy (c-ART) ( Gupta et al.,
2008). The primary drug targets in treatment of HIV are the enzymes coded by the pol gene, reverse
transcriptase (RT), protease (PR), and integrase (IN). A large number of sequences of HIV are avail-
able for RT, PR, and IN for patients who have been treated during the past nearly 30 years, and this
information permits critical sequence-based informatic analysis of drug resistance.
The selective pressure of drug therapy modulates patterns of correlated mutations at residue
positions which are both near and distal from the active site ( Chang and Torbett, 2011 ;Haq et al.,
2012 ;Flynn et al., 2015 ;Yilmaz and Schiffer, 2017 ). A mutation’s impact on the stability or fitness
of a protein however is dependent on the entire genetic background in which it occurs: a phenome-
non known as ’epistasis’. Drug resistance develops as these mutations accumulate, providing the
virus a fitness benefit in the presence of drug pressure, with a complex interplay in the roles of
Biswas et al. eLife 2019;8:e50524. DOI: https://doi.org/10.7554/eLife.50524 |
1608.03983v5.pdf | Published as a conference paper at ICLR 2017
SGDR: STOCHASTIC GRADIENT DESCENT WITH
WARM RESTARTS
Ilya Loshchilov & Frank Hutter
University of Freiburg
Freiburg, Germany,
{ilya,fh}@cs.uni-freiburg.de
ABSTRACT
Restart techniques are common in gradient-free optimization to deal with multi-
modal functions. Partial warm restarts are also gaining popularity in gradient-
based optimization to improve the rate of convergence in accelerated gradient
schemes to deal with ill-conditioned functions. In this paper, we propose a sim-
ple warm restart technique for stochastic gradient descent to improve its anytime
performance when training deep neural networks. We empirically study its per-
formance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new
state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate
its advantages on a dataset of EEG recordings and on a downsampled version of
the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
1 INTRODUCTION
Deep neural networks (DNNs) are currently the best-performing method for many classification
problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014)
or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where
DNNs perform particularly well) is the main computational bottleneck: it often requires several
days, even on high-performance GPUs, and any speedups would be of substantial value.
The training of a DNN with $n$ free parameters can be formulated as the problem of minimizing a
function $f: \mathbb{R}^n \to \mathbb{R}$. The commonly used procedure to optimize $f$ is to iteratively adjust $x_t \in \mathbb{R}^n$
(the parameter vector at time step $t$) using gradient information $\nabla f_t(x_t)$ obtained on a relatively
small $t$-th batch of $b$ datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes
an extension of the Gradient Descent (GD) to stochastic optimization of $f$ as follows:
$$x_{t+1} = x_t - \eta_t \nabla f_t(x_t), \qquad (1)$$
where $\eta_t$ is a learning rate. One would like to consider second-order information
$$x_{t+1} = x_t - \eta_t H_t^{-1} \nabla f_t(x_t), \qquad (2)$$
but this is often infeasible since the computation and storage of the inverse Hessian $H_t^{-1}$ is in-
tractable for large $n$. The usual way to deal with this problem by using limited-memory quasi-
Newton methods such as L-BFGS (Liu & Nocedal, 1989) is not currently in favor in deep learning,
not the least due to (i) the stochasticity of $\nabla f_t(x_t)$, (ii) ill-conditioning of $f$ and (iii) the presence
of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu
& Amari, 2000). Despite some recent progress in understanding and addressing the latter problems
(Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-
the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g.,
by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and
Adam (Kingma & Ba, 2014) are notable examples of such methods.
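For concreteness, the sketch below pairs the plain SGD update of Eq. (1) with a cosine-annealed learning rate that is periodically reset, in the spirit of the warm restarts this paper proposes; the exact schedule is not shown in this excerpt, so treat its form here as an assumption.

```python
import math

def cosine_warm_restart_lr(eta_min, eta_max, T_cur, T_i):
    """Learning rate T_cur epochs into a restart cycle of length T_i epochs.

    The cosine form is an assumption; the key idea is that the rate decays
    from eta_max to eta_min and is then reset ("warm restart").
    """
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

def sgd_step(x, grad, eta):
    """Plain SGD update from Eq. (1): x_{t+1} = x_t - eta_t * grad f_t(x_t)."""
    return [xi - eta * gi for xi, gi in zip(x, grad)]
```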
arXiv:1608.03983v5 [cs.LG] 3 May 2017 |
2402.05120.pdf | More Agents Is All You Need
Junyou Li*1 Qin Zhang*1 Yangbin Yu1 Qiang Fu1 Deheng Ye1
Abstract
We find that, simply via a sampling-and-voting
method, the performance of large language mod-
els (LLMs) scales with the number of agents in-
stantiated. Also, this method is orthogonal to
existing complicated methods to further enhance
LLMs, while the degree of enhancement is cor-
related to the task difficulty. We conduct com-
prehensive experiments on a wide range of LLM
benchmarks to verify the presence of our finding,
and to study the properties that can facilitate its
occurrence. Our code is publicly available at: Git.
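The sampling-and-voting method summarized above has a very small core: query the model several times, then take the majority answer. The sketch below assumes a stochastic `generate` callable wrapping an LLM or agent framework; it is not the paper's released code.

```python
from collections import Counter

def sample_and_vote(query, generate, num_agents=15):
    """Two-phase procedure: (1) sample multiple outputs for the same query,
    (2) return the majority-voted answer."""
    answers = [generate(query) for _ in range(num_agents)]
    most_common_answer, _ = Counter(answers).most_common(1)[0]
    return most_common_answer
```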
1. Introduction
Although large language models (LLMs) demonstrate re-
markable capabilities in a variety of applications (Zhao et al.,
2023), such as language generation, understanding, and
reasoning, they struggle to provide accurate answers when
faced with complicated tasks. To improve the performance
of LLMs, some of recent studies focus on ensemble meth-
ods (Wang et al., 2023b; Wan et al., 2024) and multiple
LLM-Agents collaboration frameworks (Du et al., 2023;
Wu et al., 2023).
In these works, multiple LLM agents are used to improve
the performance of LLMs. For instance, LLM-Debate (Du
et al., 2023) employs multiple LLM agents in a debate form.
The reasoning performance is improved by creating a frame-
work that allows more than one agent to “debate” the final
answer of arithmetic tasks. They show performance im-
provements compared to using one single agent. Similarly,
CoT-SC (Wang et al., 2023b) generates multiple thought
chains and picks the most self-consistent one as the final an-
swer. The reasoning performance is improved by involving
more thought chains compared to chain-of-thought (CoT)
(Wei et al., 2022) which employs a single thought chain. In-
cidentally, from the data analysis of these works, we can no-
tice the effects of putting multiple agents together, to some
extent, can lead to a performance improvement in certain
problems. For example, in Table 10 of Section 3.3 of LLM-
*Equal contribution 1Tencent Inc. Correspondence to: Deheng
Ye <dericye@tencent.com>.
Figure 1. The accuracy increases with ensemble size across
Llama2-13B, Llama2-70B and GPT-3.5-Turbo in GSM8K. When
the ensemble size scales up to 15, Llama2-13B achieves compara-
ble accuracy with Llama2-70B. Similarly, when the ensemble size
scales up to 15 and 20, Llama2-70B and GPT-3.5-Turbo achieve
comparable accuracy with their more powerful counterparts.
Debate (Du et al., 2023), the authors have reported a prelim-
inary curve: the accuracy of a math problem increases with
the number of debating agents (although the number was
simply increased from 1 to 7). Also, in Wang et al. (2023b),
involving more chains-of-thought pipelines (termed as a
“sample-and-marginalize” decoding procedure), can lead
to a performance gain. We realize that the LLM perfor-
mance may likely be improved by a brute-force scaling
up the number of agents instantiated. However, since the
scaling property of “raw” agents is not the focus of these
works, the scenarios/tasks and experiments considered are
limited. So far, there lacks a dedicated in-depth study on
this phenomenon. Hence, a natural question arises: Does
this phenomenon generally exist?
To answer the research question above, we conduct the first
comprehensive study on the scaling property of LLM agents.
To explore the potential of multiple agents, we propose to use
a simple(st) sampling-and-voting method, which involves
two phases. First, the query of the task, i.e., the input to
an LLM, is iteratively fed into a single LLM, or a multiple
LLM-Agents collaboration framework, to generate multiple
outputs. Subsequently, majority voting is used to determine
the final result. The procedure is inspired by that of the
CoT-SC, but it does not rely on designing complex CoT
arXiv:2402.05120v1 [cs.CL] 3 Feb 2024 |
2305.18290.pdf | Direct Preference Optimization:
Your Language Model is Secretly a Reward Model
Rafael Rafailov∗† Archit Sharma∗† Eric Mitchell∗†
Stefano Ermon†‡ Christopher D. Manning† Chelsea Finn†
†Stanford University‡CZ Biohub
{rafailov,architsh,eric.mitchell}@cs.stanford.edu
Abstract
While large-scale unsupervised language models (LMs) learn broad world knowl-
edge and some reasoning skills, achieving precise control of their behavior is
difficult due to the completely unsupervised nature of their training. Existing
methods for gaining such steerability collect human labels of the relative quality of
model generations and fine-tune the unsupervised LM to align with these prefer-
ences, often with reinforcement learning from human feedback (RLHF). However,
RLHF is a complex and often unstable procedure, first fitting a reward model that
reflects the human preferences, and then fine-tuning the large unsupervised LM
using reinforcement learning to maximize this estimated reward without drifting
too far from the original model. In this paper, we leverage a mapping between
reward functions and optimal policies to show that this constrained reward maxi-
mization problem can be optimized exactly with a single stage of policy training,
essentially solving a classification problem on the human preference data. The
resulting algorithm, which we call Direct Preference Optimization (DPO), is stable,
performant, and computationally lightweight, eliminating the need for fitting a
reward model, sampling from the LM during fine-tuning, or performing significant
hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to
align with human preferences as well as or better than existing methods. Notably,
fine-tuning with DPO exceeds RLHF’s ability to control sentiment of generations
and improves response quality in summarization and single-turn dialogue while
being substantially simpler to implement and train.
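This excerpt does not reproduce the DPO objective itself, so the sketch below shows a commonly used formulation: a logistic loss on the difference of policy-versus-reference log-ratios for preferred and dispreferred responses, scaled by a temperature beta. Treat the exact form and the default beta as assumptions rather than the paper's definitive equation.

```python
import numpy as np

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style preference loss (assumed form).

    Inputs are arrays of per-example summed log-probabilities of the chosen
    and rejected responses under the policy and the frozen reference model.
    """
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # -log sigmoid(logits), written in a numerically stable form
    return np.mean(np.logaddexp(0.0, -logits))
```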
1 Introduction
Large unsupervised language models (LMs) trained on very large datasets acquire surprising capabili-
ties [ 11,7,37,8]. However, these models are trained on data generated by humans with a wide variety
of goals, priorities, and skillsets. Some of these goals and skillsets may not be desirable to imitate; for
example, while we may want our AI coding assistant to understand common programming mistakes
in order to correct them, nevertheless, when generating code, we would like to bias our model toward
the (potentially rare) high-quality coding ability present in its training data. Similarly, we might want
our language model to be aware of a common misconception believed by 50% of people, but we
certainly do not want the model to claim this misconception to be true in 50% of queries about it!
In other words, selecting the model’s desired responses and behavior from its very wide knowledge
and abilities is crucial to building AI systems that are safe, performant, and controllable [ 23]. While
existing methods typically steer LMs to match human preferences using reinforcement learning (RL),
∗Equal contribution; more junior authors listed earlier.
Preprint. Under review.
arXiv:2305.18290v1 [cs.LG] 29 May 2023 |
2111.12763.pdf | Sparse is Enough in Scaling Transformers
Sebastian Jaszczur∗ (University of Warsaw), Aakanksha Chowdhery (Google Research), Afroz Mohiuddin (Google Research), Łukasz Kaiser∗ (OpenAI),
Wojciech Gajewski (Google Research), Henryk Michalewski (Google Research), Jonni Kanerva (Google Research)
Abstract
Large Transformer models yield impressive results on many tasks, but are expen-
sive to train, or even fine-tune, and so slow at decoding that their use and study
becomes out of reach. We address this problem by leveraging sparsity. We study
sparse variants for all layers in the Transformer and propose Scaling Transformers ,
a family of next generation Transformer models that use sparse layers to scale
efficiently and perform unbatched decoding much faster than the standard Trans-
former as we scale up the model size. Surprisingly, the sparse layers are enough
to obtain the same perplexity as the standard Transformer with the same number
of parameters. We also integrate with prior sparsity approaches to attention and
enable fast inference on long sequences even with limited memory. This results in
performance competitive to the state-of-the-art on long text summarization.
1 Introduction
The field of natural language processing has seen dramatic improvements in recent years due to large
neural networks based on the Transformer architecture. The original Transformer [ 42] significantly
advanced state-of-the-art in machine translation. BERT [ 7] surpassed all previous methods on
question answering, language inference and other NLP tasks and was followed by a line of models
like T5 [ 30] that further improved these results. The GPT line of models [ 29,3] elevated language
generation to the point that GPT-2 was invited to write short passages for the Economist and GPT-3
created whole articles almost indistinguishable from human-written ones.
The benefits of this progress are undercut by the huge costs such models incur. Strubell et al. [36]
estimate that training a single base BERT model costs $4k-$12k and emits as much CO2 as one
passenger’s share of a 4-hour flight and later Patterson et al. [27] estimate that training GPT-3 has
three times as much tCO2e (metric tons of CO2 equivalent) emissions as a SF-NY round trip flight.
Data and serving costs are also forbidding: a single training run of BERT, for example, processes
128B tokens, and Google Translate reportedly1 serves over 143B words per day.
With the growing popularity and size of these models, it is increasingly valuable to make them scale
efficiently. In this work we propose Scaling Transformers with a separate sparse mechanism for the
query, key, value and output layers (QKV layers for short) and combine it with sparse feedforward
blocks to get a fully sparse Transformer architecture.
To quantify the computational complexity of inference in Transformer models, recall the architecture
of a Transformer decoder block. It consists of three parts: a masked self-attention layer, an encoder-
decoder attention layer and a feedforward block. The sizes of these layers are parameterized by $d_{model}$
and $d_{ff}$. The base BERT model sets $d_{model} = 768$, the large BERT has $d_{model} = 1024$, the largest
∗Work done while at Google Research.
1https://cutt.ly/skkFJ7a
35th Conference on Neural Information Processing Systems (NeurIPS 2021), Sydney, Australia.
arXiv:2111.12763v1 [cs.LG] 24 Nov 2021 |
10.1126.science.aay8015.pdf | STRUCTURAL BIOLOGY
Structural basis for strand-transfer inhibitor binding
to HIV intasomes
Dario Oliveira Passos1*, Min Li2*, Ilona K. Jóźwik1, Xue Zhi Zhao3, Diogo Santos-Martins4,
Renbin Yang2, Steven J. Smith3, Youngmin Jeon1, Stefano Forli4, Stephen H. Hughes3,
Terrence R. Burke Jr.3, Robert Craigie2, Dmitry Lyumkis1,4†
The HIV intasome is a large nucleoprotein assembly that mediates the integration of a DNA copy of
the viral genome into host chromatin. Intasomes are targeted by the latest generation of antiretroviral
drugs, integrase strand-transfer inhibitors (INSTIs). Challenges associated with lentiviral intasome
biochemistry have hindered high-resolution structural studies of how INSTIs bind to their native drug
target. Here, we present high-resolution cryo-electron microscopy structures of HIV intasomes bound
to the latest generation of INSTIs. These structures highlight how small changes in the integrase active
site can have notable implications for drug binding and design and provide mechanistic insights into
why a leading INSTI retains efficacy against a broad spectrum of drug-resistant variants. The data have
implications for expanding effective treatments available for HIV-infected individuals.
HIV currently infects ~40 million people
worldwide. The virus’s ability to inte-
grate a viral DNA (vDNA) copy of its
RNA genome into host chromatin, lead-
ing to the establishment of a permanent
and irreversible infection of the target cell (and any progeny cells), is the central chal-
lenge in developing a cure ( 1). Integration, cat-
alyzed by the viral integrase (IN) protein, is
essential for retroviral replication and results
in the covalent linkage of vDNA to the host
genome ( 2,3). Proper integration depends on
the formation of a large oligomeric nucleo-
protein complex containing viral IN assembled
on the ends of vDNA, commonly referred to
as an intasome ( 4–9). All intasomes contain
multimeric IN bound to vDNA ends, but they
are characterized by distinct oligomeric con-
figurations and domain arrangements.
Intasome assembly and catalysis proceed
through a multistep process that involves sev-
eral distinct intermediates (fig. S1). The cat-
alytically competent cleaved synaptic complex
(CSC) intasome, which contains free 3 ′-OH
ends, is the specific target of the IN strand-
transfer inhibitors (INSTIs), a group of drugs
that bind to both the active site of HIV IN
and the ends of vDNA, thereby blocking ca-
talysis. Treatment with INSTIs, which are a key
component of combined antiretroviral thera-
py, leads to a rapid decrease in viral load in
patients. INSTIs are generally well tolerated,
and the second-generation drugs do not read-
ily select for resistance ( 10–13). They are used
in the recommended first-line combination therapies for treating HIV-infected patients
and are prime candidates for future develop-
ment ( 14,15).
The prototype foamy virus (PFV) intasome
has been used as a model system to under-
stand INSTI binding ( 6,16–19). However, this
system has limitations. PFV and HIV INs share
only ~25% of sequence identity in the catalytic
core domain (CCD) ( 6), and many of the sites
where drug-resistance mutations occur in HIV
IN are not conserved in PFV IN. Moreover,
minor changes in the structure of an INSTI can
profoundly affect its ability to inhibit mutant
forms of HIV ( 19,20). Thus, understanding
how INSTIs interact with HIV intasomes —
their natural target —at a molecular level is
needed to overcome drug resistance and to
guide development of improved inhibitors.
We established conditions for assembling,
purifying, and structurally characterizing HIV
CSC intasomes. Previously, we have shown
that fusion of the small protein Sso7d to the
N-terminal domain (NTD) of HIV IN improves
its solubility and facilitates assembly and puri-
fication of strand-transfer complex intasomes
( 4, 21 ). We further optimized conditions re-
quired for CSC formation and purification
and showed that these complexes are bio-
chemically active for concerted integration
(fig. S2). We used a tilted cryo-electron mi-
croscopy (cryo-EM) data collection strategy
to alleviate the effects of preferential speci-
men orientation on cryo-EM grids ( 22), which
allowed us to collect data on the apo form
of the HIV CSC intasome. The cryo-EM re-
construction of the HIV CSC intasome reveals
a twofold symmetric dodecameric molecular
assembly of IN. The highest resolution (~2.7 Å)
resides within the core containing the two catalytic sites and the ends of vDNA (fig. S3
and table S1).
Lentiviral intasomes have a large degree of
heterogeneity and vary in size depending on the protein and biochemical conditions, form-
ing tetramers, dodecamers, hexadecamers,
and proto-intasome stacks (figs. S4 and S5).
The basic underlying unit, the conserved in-
tasome core (CIC), resembles —but is not iden-
tical to —the tetrameric PFV intasome. The
CIC is composed of two IN dimers, each of
which binds one vDNA end and a C-terminal
domain (CTD) from a neighboring protomer
(23). In the cryo-EM reconstruction, four fully
defined IN protomers, two CTDs from flank-
ing protomers, and two additional CTDs from
distal subunits are clearly resolved (Fig. 1A);
these were used to build an atomic model (Fig. 1B). With the exception of the additional
CTDs from distal subunits, which are not
conserved in other retroviral species, the re-
solved regions constitute the intasome CIC.
Each of the two active sites in an HIV in-
tasome contains the catalytic residues Asp64,
Asp116, and Glu152, forming the prototypical
DDE motif present in many nucleases, trans-
posases, and other INs ( 24). The regions near
the active sites of the PFV and HIV intasomes
are similar because many of the residues par-
ticipate in substrate binding and catalysis.
However, farther from the active sites, the
structures diverge (Fig. 1C and figs. S6 and S7).
The largest differences reside in the synaptic
CTD from the flanking protomer, specifically
the region around the loop spanning HIV IN
Arg228-Lys236. The corresponding loop in PFV
IN has four additional residues and assumes a
distinct configuration. Clinically relevant drug-
resistance mutations occur within regions of
HIV IN where the amino acid sequences be-
tween the two orthologs diverge ( 11,12).
To better understand how INSTIs interact
with HIV intasomes, we assembled the com-
plex with bictegravir (BIC), a leading second-
generation INSTI and the most broadly potent of
all clinically approved INSTIs ( 25). We also ex-
amined the binding of additional compounds —
named 4f, 4d, and 4c, which contain a distinct
chelating core (Fig. 2A) —whose development
was motivated by the need to further improve
potency against drug-resistant variants ( 19,20).
Currently, 4d is a leading drug candidate that
shows improved efficacy over all clinically used
and developmental compounds against the
known drug-resistant variants (25, 26) (fig. S8).
Intasomes were coassembled and copurified
with INSTIs, and we verified their inhibitory
activity (fig. S9). The cryo-EM structures of
INSTI-bound CSCs extend to a comparable
~2.6 to 2.7 Å resolution near the active site,
which allows the derivation of atomic models
(figs. S10 to S12 and table S1).
INSTIs bind HIV CSCs within a well-defined
pocket, formed by the interface between two
IN protomers and vDNA. Several important
pharmacophores characterize the binding of
all INSTIs (Fig. 2, B and C). First, three cen-
tral electronegative heteroatoms chelate two
Passos et al., Science 367, 810–814 (2020), 14 February 2020.
1The Salk Institute for Biological Studies, Laboratory of Genetics, La Jolla, CA 92037, USA. 2National Institutes of Health, National Institute of Diabetes and Digestive Diseases, Bethesda, MD 20892, USA. 3Center for Cancer Research, National Cancer Institute, Frederick, MD 21702, USA. 4Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA 92037, USA.
*These authors contributed equally to this work.
†Corresponding author. Email: dlyumkis@salk.edu
|
2404.01413v2.pdf | Is Model Collapse Inevitable? Breaking the Curse of
Recursion by Accumulating Real and Synthetic Data
Matthias Gerstgrasser∗†, Rylan Schaeffer∗, Apratim Dey∗, Rafael Rafailov∗, Dhruv Pai
Stanford University
{mgerst,rschaef,apd1995,rafailov,dhruvpai }@stanford.edu
Henry Sleight‡, John Hughes‡, Tomasz Korbak‡, Rajashree Agrawal‡
Constellation
Andrey Gromov
University of Maryland, College Park
Daniel A. Roberts
MIT & Sequoia Capital
Diyi Yang, David Donoho & Sanmi Koyejo
Stanford University
{diyiy,donoho,sanmi }@stanford.edu
Abstract
The proliferation of generative models, combined with pretraining on web-
scale data, raises a timely question: what happens when these models are
trained on their own generated outputs? Recent investigations into model-
data feedback loops proposed that such loops would lead to a phenomenon
termed model collapse , under which performance progressively degrades
with each model-data feedback iteration until fitted models become useless.
However, those studies largely assumed that new data replace old data over
time, whereas an arguably more realistic assumption is that data accumulate
over time. In this paper, we ask: what effect does accumulating data have
on model collapse? We empirically study this question by pretraining se-
quences of language models on text corpora. We confirm that replacing
the original real data by each generation’s synthetic data does indeed tend
towards model collapse, then demonstrate that accumulating the successive
generations of synthetic data alongside the original real data avoids model
collapse; these results hold across a range of model sizes, architectures, and
hyperparameters. We obtain similar results for deep generative models on
other types of real data: diffusion models for molecule conformation gener-
ation and variational autoencoders for image generation. To understand
why accumulating data can avoid model collapse, we use an analytically
tractable framework introduced by prior work in which a sequence of linear
models are fit to the previous models’ outputs. Previous work used this
framework to show that if data are replaced, the test error increases with
the number of model-fitting iterations; we extend this argument to prove
that if data instead accumulate, the test error has a finite upper bound
independent of the number of iterations, meaning model collapse no longer
occurs. Our work provides consistent empirical and theoretical evidence
that data accumulation avoids model collapse.
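The replace-versus-accumulate contrast described above can be reproduced in a few lines with the kind of linear-model setting the paper uses for analysis. The following is a minimal sketch under assumed details (ordinary least squares, Gaussian features, synthetic labels drawn from the previous generation's fit, and parameter error as a proxy for test error); it is not the authors' exact experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200                        # feature dimension, samples per generation
w_true = rng.normal(size=d)

def fit(X, y):
    # ordinary least squares
    return np.linalg.lstsq(X, y, rcond=None)[0]

def run(generations, accumulate):
    X_all = rng.normal(size=(n, d))
    y_all = X_all @ w_true + 0.1 * rng.normal(size=n)   # generation 0: real data
    w = fit(X_all, y_all)
    errs = []
    for _ in range(generations):
        # the next generation trains on labels produced by the previous model
        X_new = rng.normal(size=(n, d))
        y_new = X_new @ w + 0.1 * rng.normal(size=n)     # synthetic labels
        if accumulate:
            X_all = np.vstack([X_all, X_new])            # keep real + all synthetic data
            y_all = np.concatenate([y_all, y_new])
        else:
            X_all, y_all = X_new, y_new                  # replace old data entirely
        w = fit(X_all, y_all)
        errs.append(np.mean((w - w_true) ** 2))          # proxy for test error
    return errs

print("replace:   ", run(10, accumulate=False)[-1])
print("accumulate:", run(10, accumulate=True)[-1])
```

Running the sketch shows the replace setting drifting away from the true parameters across generations while the accumulate setting stays bounded, mirroring the paper's qualitative claim.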
∗Denotes equal authorship.
†Harvard & Stanford University.
‡Denotes equal contribution.
|
10.1038.s42004-024-01098-2.pdf | ARTICLE
Evolution shapes interaction patterns for epistasis
and speci fic protein binding in a two-component
signaling system
Zhiqiang Yan1& Jin Wang2✉
The elegant design of protein sequence/structure/function relationships arises from the
interaction patterns between amino acid positions. A central question is how evolutionary forces shape the interaction patterns that encode long-range epistasis and binding specificity.
Here, we combined family-wide evolutionary analysis of natural homologous sequences and structure-oriented evolution simulation for the two-component signaling (TCS) system. The magnitude-frequency relationship of coupling conservation between positions manifests a power-law-like distribution, and the positions with high coupling conservation are sparse but distributed intensely on the binding surfaces and hydrophobic core. The structure-specific
interaction pattern involves further optimization of local frustrations at or near the binding surface to adapt to the binding partner. The construction of family-wide conserved interaction patterns and structure-specific ones demonstrates that binding specificity is modulated by
both direct intermolecular interactions and long-range epistasis across the binding complex. Evolution sculpts the interaction patterns via sequence variations at both family-wide and structure-specific levels for the TCS system. https://doi.org/10.1038/s42004-024-01098-2
1Center for Theoretical Interdisciplinary Sciences, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang 325001, PR China.
2Department of Chemistry and Physics, State University of New York at Stony Brook, Stony Brook, NY 11790, USA.✉email: jin.wang.1@stonybrook.edu
COMMUNICATIONS CHEMISTRY | (2024) 7:13 | https://doi.org/10.1038/s42004-024-01098-2 | www.nature.com/commschem
2305.10626.pdf | Language Models Meet World Models:
Embodied Experiences Enhance Language Models
Jiannan Xiang∗♠, Tianhua Tao∗♣, Yi Gu♠, Tianmin Shu♢△,
Zirui Wang♠, Zichao Yang♡, Zhiting Hu♠
♠UC San Diego,♣UIUC,♢MIT,△JHU,♡CMU
Abstract
While large language models (LMs) have shown remarkable capabilities across
numerous tasks, they often struggle with simple reasoning and planning in physical
environments, such as understanding object permanence or planning household ac-
tivities. The limitation arises from the fact that LMs are trained only on written text
and miss essential embodied knowledge and skills. In this paper, we propose a new
paradigm of enhancing LMs by finetuning them with world models , to gain diverse
embodied knowledge while retaining their general language capabilities. Our ap-
proach deploys an embodied agent in a world model, particularly a simulator of the
physical world (VirtualHome), and acquires a diverse set of embodied experiences
through both goal-oriented planning and random exploration. These experiences
are then used to finetune LMs to teach diverse abilities of reasoning and acting in
the physical world, e.g., planning and completing goals, object permanence and
tracking, etc. Moreover, it is desirable to preserve the generality of LMs during
finetuning, which facilitates generalizing the embodied knowledge across tasks
rather than being tied to specific simulations. We thus further introduce the classical
elastic weight consolidation (EWC) for selective weight updates, combined with
low-rank adapters (LoRA) for training efficiency. Extensive experiments show
our approach substantially improves base LMs on 18 downstream tasks by 64.28%
on average. In particular, the small LMs (1.3B, 6B, and 13B) enhanced by our
approach match or even outperform much larger LMs (e.g., ChatGPT).1
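As a rough illustration of the selective-update idea mentioned in the abstract (EWC combined with LoRA), the sketch below shows how an elastic weight consolidation penalty can be added to a finetuning loss. Names such as fisher_diag, anchor_params, and theta_star are illustrative assumptions, not the paper's code; in the paper's setup the trainable parameters would be low-rank LoRA adapters while the penalty discourages drift in the consolidated weights.

```python
import torch

def ewc_penalty(named_params, anchor_params, fisher_diag, lam):
    """Elastic weight consolidation: lam/2 * sum_i F_i * (theta_i - theta_star_i)^2.

    anchor_params / fisher_diag: dicts of tensors keyed by parameter name, holding the
    pre-finetuning weights and their (diagonal) Fisher information estimates.
    """
    penalty = 0.0
    for name, p in named_params:
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Illustrative use during finetuning on embodied experiences; with LoRA only the low-rank
# adapter weights would require gradients, keeping the update cheap:
# loss = task_loss + ewc_penalty(model.named_parameters(), theta_star, fisher, lam=0.5)
```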
1 Introduction
Language Models (LMs) have demonstrated impressive performance on a wide range of natural
language processing tasks [ 34,48,4,7,54]. In particular, recent studies show that LMs can assist
decision-making for embodied tasks [ 1,18,25,45,19], demonstrating a certain level of understanding
of the physical world. However, such understanding is not robust enough for many reasoning and
planning tasks in physical environments. As shown in Figure 1, even the latest large LMs like
ChatGPT2 can still make mistakes in seemingly simple inquiries, such as counting objects in a
location. We hypothesize that this is because current LMs trained merely with large-scale text corpora
are devoid of embodied experiences such as navigating in an environment, interacting with objects,
and sensing as well as tracking the world state. Consequently, they lack robust and comprehensive
embodied knowledge necessary for reasoning and planning associated with physical environments.
A related line of research finetunes LMs in order to improve specific embodied tasks, resulting in
task-specialized models [6, 58, 21, 57].
∗Equal contribution.
1The code is available at https://github.com/szxiangjn/world-model-for-language-model .
2Based on GPT-3.5-turbo.
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
2306.14892.pdf | Supervised Pretraining Can Learn
In-Context Reinforcement Learning
Jonathan N. Lee∗1, Annie Xie∗1, Aldo Pacchiano2, Yash Chandak1
Chelsea Finn1, Ofir Nachum3, Emma Brunskill1
1Stanford University,2Microsoft Research,3Google DeepMind
Abstract
Large transformer models trained on diverse datasets have shown a remarkable
ability to learn in-context , achieving high few-shot performance on tasks they were
not explicitly trained to solve. In this paper, we study the in-context learning capa-
bilities of transformers in decision-making problems, i.e., reinforcement learning
(RL) for bandits and Markov decision processes. To do so, we introduce and study
Decision-Pretrained Transformer (DPT ), a supervised pretraining method where
the transformer predicts an optimal action given a query state and an in-context
dataset of interactions, across a diverse set of tasks. This procedure, while simple,
produces a model with several surprising capabilities. We find that the pretrained
transformer can be used to solve a range of RL problems in-context, exhibiting both
exploration online and conservatism offline, despite not being explicitly trained to
do so. The model also generalizes beyond the pretraining distribution to new tasks
and automatically adapts its decision-making strategies to unknown structure. The-
oretically, we show DPT can be viewed as an efficient implementation of Bayesian
posterior sampling, a provably sample-efficient RL algorithm. We further leverage
this connection to provide guarantees on the regret of the in-context algorithm
yielded by DPT, and prove that it can learn faster than algorithms used to generate
the pretraining data. These results suggest a promising yet simple path towards
instilling strong in-context decision-making abilities in transformers.
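The pretraining procedure described above reduces to a standard supervised objective: given an in-context dataset from a sampled task and a query state, predict that task's optimal action. The sketch below assumes a discrete action space and a placeholder model(context, query_state) returning action logits; tensor shapes and field names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dpt_pretrain_step(model, batch, optimizer):
    """One supervised pretraining step for a Decision-Pretrained Transformer (sketch).

    batch["context"]:        (B, T, d_ctx) in-context interactions (s, a, r, s') from a sampled task
    batch["query_state"]:    (B, d_s) query states
    batch["optimal_action"]: (B,) indices of the optimal action for each sampled task
    """
    logits = model(batch["context"], batch["query_state"])      # (B, num_actions)
    loss = F.cross_entropy(logits, batch["optimal_action"])     # predict the optimal action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At deployment, the same model is queried with the growing history of its own interactions as context, which is what yields in-context exploration online and conservatism offline.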
1 Introduction
For supervised learning, transformer-based models trained at scale have shown impressive abilities
to perform tasks given an input context, often referred to as few-shot prompting or in-context
learning [ 1]. In this setting, a pretrained model is presented with a small number of supervised input-
output examples in its context, and is then asked to predict the most likely completion (i.e. output)
of an unpaired input, without parameter updates. Over the last few years in-context learning has
been applied to solve a range of tasks [ 2] and a growing number works are beginning to understand
and analyze in-context learning for supervised learning [ 3,4,5,6]. In this work, our focus is to
study and understand in-context learning applied to sequential decision-making, specifically in the
context of reinforcement learning (RL) settings. Decision-making (e.g. RL) is considerably more
dynamic and complex than supervised learning. Understanding and leveraging in-context learning
here could potentially unlock significant improvements in an agent’s ability to adapt and make
few-shot decisions in response to observations from the world. Such capabilities are instrumental for
practical applications ranging from robotics to recommendation systems.
For in-context decision-making [ 7,8,9], rather than input-output tuples, the context takes the form
of state-action-reward tuples representing a dataset of interactions with an unknown environments.
∗Equal contribution.
2405.03651v1.pdf | Published as a conference paper at ICLR 2024
ADAPTIVE RETRIEVAL AND SCALABLE INDEXING FOR
k-NN S EARCH WITH CROSS -ENCODERS
Nishant Yadav1∗, Nicholas Monath2, Manzil Zaheer2, Rob Fergus2, Andrew McCallum1
1University of Massachusetts Amherst,2Google DeepMind
ABSTRACT
Cross-encoder (CE) models which compute similarity by jointly encoding a
query-item pair perform better than using dot-product with embedding-based
models (dual-encoders) at estimating query-item relevance. Existing approaches
perform k-NN search with cross-encoders by approximating the CE similarity
with a vector embedding space fit either with dual-encoders (DE) or CUR matrix
factorization. DE-based retrieve-and-rerank approaches suffer from poor recall as
DE generalizes poorly to new domains and the test-time retrieval with DE is de-
coupled from the CE. While CUR-based approaches can be more accurate than the
DE-based retrieve-and-rerank approach, such approaches require a prohibitively
large number of CE calls to compute item embeddings, thus making it imprac-
tical for deployment at scale. In this paper, we address these shortcomings with
our proposed sparse-matrix factorization based method that efficiently computes
latent query and item representations to approximate CE scores and performs k-
NN search with the approximate CE similarity. In an offline indexing stage, we
compute item embeddings by factorizing a sparse matrix containing query-item
CE scores for a set of train queries. Our method produces a high-quality ap-
proximation while requiring only a fraction of CE similarity calls as compared to
CUR-based methods, and allows for leveraging DE models to initialize the em-
bedding space while avoiding compute- and resource-intensive finetuning of DE
via distillation. At test time, we keep item embeddings fixed and perform retrieval
over multiple rounds, alternating between a) estimating the test query embedding
by minimizing error in approximating CE scores of items retrieved thus far, and
b) using the updated test query embedding for retrieving more items in the next
round. Our proposed k-NN search method can achieve up to 5% and 54% im-
provement in k-NN recall for k= 1 and 100 respectively over the widely-used
DE-based retrieve-and-rerank approach. Furthermore, our proposed approach to
index the items by aligning item embeddings with the CE achieves up to 100 ×
and 5×speedup over CUR-based and dual-encoder distillation based approaches
respectively while matching or improving k-NN search recall over baselines.
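The test-time procedure described in the abstract alternates between fitting a query embedding to the cross-encoder scores observed so far and retrieving more items with the refined embedding. The sketch below is a simplified rendering of that loop: cross_encoder_scores is an assumed callable, the seed set is chosen at random rather than with a dual-encoder initialization, and the plain least-squares step stands in for the paper's exact estimator.

```python
import numpy as np

def knn_with_ce(query, item_emb, cross_encoder_scores, rounds=3, per_round=50, k=10):
    """item_emb: (N, d) fixed item embeddings from the offline factorization stage.
    cross_encoder_scores(query, idx_list) -> CE scores for the given item indices (assumed)."""
    N, d = item_emb.shape
    retrieved = list(np.random.choice(N, size=per_round, replace=False))   # seed items
    scores = list(cross_encoder_scores(query, retrieved))
    for _ in range(rounds):
        E = item_emb[retrieved]                       # (m, d) embeddings of scored items
        s = np.asarray(scores)                        # (m,) exact CE scores
        # query embedding that best explains the CE scores observed so far (least squares)
        q, *_ = np.linalg.lstsq(E, s, rcond=None)
        approx = item_emb @ q                         # approximate CE scores for all items
        candidates = np.argsort(-approx)
        new = [i for i in candidates if i not in set(retrieved)][:per_round]
        retrieved += new
        scores += list(cross_encoder_scores(query, new))
    order = np.argsort(-np.asarray(scores))
    return [retrieved[i] for i in order[:k]]          # top-k by exact CE score
```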
1 INTRODUCTION
Efficient and accurate nearest neighbor search is paramount for retrieval (Menon et al., 2022; Rosa
et al., 2022; Qu et al., 2021), classification in large output spaces (e.g., entity linking (Ayoola et al.,
2022; Logeswaran et al., 2019; Wu et al., 2020)), non-parametric models (Das et al., 2022; Wang
et al., 2022), and many other such applications in machine learning (Goyal et al., 2022; Izacard
et al., 2023; Bahri et al., 2020). The accuracy and efficiency of nearest neighbor search depends
on a combination of factors (1) the computational cost of pairwise distance comparisons between
datapoints, (2) preprocessing time for constructing a nearest neighbor index (e.g., dimensionality
reduction (Indyk, 2000), quantization (Ge et al., 2013; Guo et al., 2020), data structure construc-
tion (Beygelzimer et al., 2006; Malkov & Yashunin, 2018; Zaheer et al., 2019)), and (3) the time
taken to query the index to retrieve the nearest neighbor(s).
Similarity functions such as cross-encoders which take a pair of data points as inputs and directly
output a scalar score, have achieved state-of-the-art results on numerous tasks (e.g., QA (Qu et al.,
∗Now at Google DeepMind
|
1907.05600.pdf | Generative Modeling by Estimating Gradients of the
Data Distribution
Yang Song
Stanford University
yangsong@cs.stanford.edu
Stefano Ermon
Stanford University
ermon@cs.stanford.edu
Abstract
We introduce a new generative model where samples are produced via Langevin
dynamics using gradients of the data distribution estimated with score matching.
Because gradients can be ill-defined and hard to estimate when the data resides on
low-dimensional manifolds, we perturb the data with different levels of Gaussian
noise, and jointly estimate the corresponding scores, i.e., the vector fields of
gradients of the perturbed data distribution for all noise levels. For sampling, we
propose an annealed Langevin dynamics where we use gradients corresponding to
gradually decreasing noise levels as the sampling process gets closer to the data
manifold. Our framework allows flexible model architectures, requires no sampling
during training or the use of adversarial methods, and provides a learning objective
that can be used for principled model comparisons. Our models produce samples
comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new
state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate
that our models learn effective representations via image inpainting experiments.
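The annealed Langevin sampler sketched below follows the procedure described in the abstract: run Langevin dynamics at a sequence of decreasing noise levels, with the step size scaled by the squared noise level. Here score_net(x, sigma) is a placeholder for a trained noise-conditional score network, and the specific constants are illustrative rather than tuned values.

```python
import torch

@torch.no_grad()
def annealed_langevin(score_net, shape, sigmas, eps=2e-5, steps_per_sigma=100):
    """Annealed Langevin dynamics (sketch). `sigmas` is a decreasing sequence of noise
    levels; score_net(x, sigma) estimates the score of the sigma-perturbed data density."""
    x = torch.rand(shape)                               # arbitrary initialization
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2         # step size shrinks with the noise level
        for _ in range(steps_per_sigma):
            z = torch.randn_like(x)
            x = x + 0.5 * alpha * score_net(x, sigma) + (alpha ** 0.5) * z
    return x
```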
1 Introduction
Generative models have many applications in machine learning. To list a few, they have been
used to generate high-fidelity images [ 26,6], synthesize realistic speech and music fragments [ 58],
improve the performance of semi-supervised learning [ 28,10], detect adversarial examples and
other anomalous data [ 54], imitation learning [ 22], and explore promising states in reinforcement
learning [ 41]. Recent progress is mainly driven by two approaches: likelihood-based methods [ 17,
29,11,60] and generative adversarial networks (GAN [ 15]). The former uses log-likelihood (or a
suitable surrogate) as the training objective, while the latter uses adversarial training to minimize
f-divergences [40] or integral probability metrics [2, 55] between model and data distributions.
Although likelihood-based models and GANs have achieved great success, they have some intrinsic
limitations. For example, likelihood-based models either have to use specialized architectures to
build a normalized probability model ( e.g., autoregressive models, flow models), or use surrogate
losses ( e.g., the evidence lower bound used in variational auto-encoders [ 29], contrastive divergence
in energy-based models [ 21]) for training. GANs avoid some of the limitations of likelihood-based
models, but their training can be unstable due to the adversarial training procedure. In addition, the
GAN objective is not suitable for evaluating and comparing different GAN models. While other
objectives exist for generative modeling, such as noise contrastive estimation [ 19] and minimum
probability flow [50], these methods typically only work well for low-dimensional data.
In this paper, we explore a new principle for generative modeling based on estimating and sampling
from the (Stein) score [33] of the logarithmic data density, which is the gradient of the log-density
function at the input data point. This is a vector field pointing in the direction where the log data
density grows the most. We use a neural network trained with score matching [ 24] to learn this
vector field from data. We then produce samples using Langevin dynamics, which approximately
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. |
2301.13196.pdf | Looped Transformers as Programmable Computers
Angeliki Giannou*, Shashank Rajput*, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos (University of Wisconsin-Madison)
Jason D. Lee (Princeton University)
January 31, 2023
Abstract
We present a framework for using transformer networks as universal computers by program-
ming them with specific weights and placing them in a loop. Our input sequence acts as a
punchcard, consisting of instructions and memory for data read/writes. We demonstrate that
a constant number of encoder layers can emulate basic computing blocks, including embed-
ding edit operations, non-linear functions, function calls, program counters, and conditional
branches. Using these building blocks, we emulate a small instruction-set computer. This
allows us to map iterative algorithms to programs that can be executed by a looped, 13-layer
transformer. We show how this transformer, instructed by its input, can emulate a basic
calculator, a basic linear algebra library, and in-context learning algorithms that employ back-
propagation. Our work highlights the versatility of the attention mechanism, and demonstrates
that even shallow transformers can execute full-fledged, general-purpose programs.
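The execution model described in the abstract is simply a fixed-weight transformer applied repeatedly to its own output, with the input sequence serving as both program ("punchcard") and scratch memory. The following is a schematic sketch only; in the paper the block is a specific 13-layer encoder with hard-coded weights, whereas block here is a placeholder.

```python
def run_looped_transformer(block, punchcard, n_loops):
    """Repeatedly apply one fixed-weight transformer block to the input sequence.

    `punchcard` holds instructions plus read/write memory slots; each pass through the
    loop emulates one step of the programmed machine (program counter update, memory
    edits, conditional branching), as constructed in the paper.
    """
    x = punchcard
    for _ in range(n_loops):
        x = block(x)            # same weights on every iteration
    return x
```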
1 Introduction
Transformers (TFs) have become a popular choice for a wide range of machine learning tasks,
achieving state-of-the-art results in fields such as natural language processing and computer
vision [Vaswani et al., 2017, Khan et al., 2022, Yuan et al., 2021, Dosovitskiy et al., 2020]. One
key reason for their success is their ability to capture higher-order relationships and long-range
dependencies across tokens, through attention. This allows TFs to model contextual information
and makes them effective in tasks such as machine translation and language modeling, where
they have consistently outperformed other methods [Vaswani et al., 2017, Kenton and Toutanova,
2019].
Language models with billions of parameters, such as GPT-3 (175B parameters Brown et al.
[2020]) and PaLM (540B parameters Chowdhery et al. [2022]), have achieved state-of-the-art
*Equal contribution. The title of this paper was not created by a transformer, but we can’t guarantee the same for
this footnote.
|
1109.2146.pdf | Journal of Artificial Intelligence Research 24 (2005) 1-48 S ubmitted 11/04; published 07/05
CIXL2: A Crossover Operator for Evolutionary Algorithms
Based on Population Features
Domingo Ortiz-Boyer dortiz@uco.es
C´ esar Herv´ as-Mart´ ınez chervas@uco.es
Nicol´ as Garc´ ıa-Pedrajas npedrajas@uco.es
Department of Computing and Numerical Analysis
University of C´ ordoba, Spain
Abstract
In this paper we propose a crossover operator for evolutionary algorithms with real
values that is based on the statistical theory of population distributions. The operator is
based on the theoretical distribution of the values of the genes of the best individuals in
the population. The proposed operator takes into account the localization and dispersion
features of the best individuals of the population with the objective that these features
would be inherited by the offspring. Our aim is the optimization of the balance between
exploration and exploitation in the search process.
In order to test the efficiency and robustness of this crossover, we have used a set of
functions to be optimized with regard to different criteria, such as multimodality, sep-
arability, regularity, and epistasis. With this set of functions we can draw conclusions
as a function of the problem at hand. We analyze the results using ANOVA and multiple
comparison statistical tests.
As an example of how our crossover can be used to solve artificial intelligence problems,
we have applied the proposed model to the problem of obtaining the weight of each network
in an ensemble of neural networks. The results obtained are above the performance of
standard methods.
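One plausible rendering of the idea in this abstract, building a confidence interval from the localization (mean) and dispersion (standard error) of the best individuals' genes and letting offspring inherit those features, is sketched below. The exact offspring-construction rule of CIXL2 differs in its details; this is only an illustration of the statistical ingredients, with function and parameter names chosen for the example.

```python
import numpy as np
from scipy import stats

def ci_crossover(parent, best_pop, alpha=0.05, rng=None):
    """Crossover guided by a confidence interval of the best individuals' genes (sketch).

    best_pop: (n, d) array with the genes of the n best individuals.
    Returns one offspring whose genes are pulled toward the per-gene confidence
    interval built from the mean (localization) and standard error (dispersion).
    """
    rng = rng or np.random.default_rng()
    n, d = best_pop.shape
    mean = best_pop.mean(axis=0)
    sem = best_pop.std(axis=0, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    lower, upper = mean - t * sem, mean + t * sem     # confidence interval per gene
    target = rng.uniform(lower, upper)                # a point inside the interval
    beta = rng.uniform(0, 1, size=d)                  # per-gene blend factors
    return beta * parent + (1 - beta) * target        # offspring inherits both features
```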
1. Introduction
Evolutionary algorithms (EAs) are general purpose searching methods. The selection pro-
cess and the crossover and mutation operators establish a balance between the exploration
and exploitation of the search space which is very adequate for a wide variety of problems
whose solution presents difficulties that are insolvable using classical methods. Most of
these problems are defined in continuous domains, so the evolutionary algorithms applied
use real values, namely, evolution strategies (ESs), real-coded genetic algorithms (RCGAs),
and evolutionary programming (EP). For these paradigms the precision of the solution does
not depend on the coding system, as in binary coded genetic algorithms, but on the precision
of the computer system where the algorithms are run.
The selection process drives the search towards the regions of the best individuals.
The mutation operator randomly modifies, with a given probability, one or more genes of a
chromosome, thus increasing the structural diversity of the population. As we can see, it is
clearly an exploration operator that helps to recover the genetic diversity lost during the
selection phase and to explore new solutions, avoiding premature convergence. In this way,
the probability of reaching a given point in the search space is never zero. This operator,
©2005 AI Access Foundation. All rights reserved. |
1804.00746v4.pdf | The Simple Essence of Automatic Differentiation
Extended version∗
Conal Elliott
Target
conal@conal.net
March, 2018
Abstract
Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and
other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however,
are complex and stateful, hindering deep understanding, improvement, and parallel execution. This paper
develops a simple, generalized AD algorithm calculated from a simple, natural specification. The general
algorithm is then specialized by varying the representation of derivatives. In particular, applying well-known
constructions to a naive representation yields two RAD algorithms that are far simpler than previously known.
In contrast to commonly used RAD implementations, the algorithms defined here involve no graphs, tapes,
variables, partial derivatives, or mutation. They are inherently parallel-friendly, correct by construction, and
usable directly from an existing programming language with no need for new data types or programming
style, thanks to use of an AD-agnostic compiler plugin.
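The paper itself derives its algorithms in Haskell from a categorical specification; as a language-neutral illustration of the compositional idea it builds on (pairing every value with a derivative and composing by the chain rule), here is a tiny forward-mode AD sketch in Python. Note that the paper's main results concern reverse mode, which this sketch does not implement.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """A value paired with its derivative; arithmetic follows the chain rule."""
    val: float
    der: float

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [ x * sin(x) ] at x = 2.0
x = Dual(2.0, 1.0)           # seed the input's derivative with 1
y = x * sin(x)
print(y.val, y.der)          # value and derivative at x = 2
```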
1 Introduction
Accurate, efficient, and reliable computation of derivatives has become increasingly important over the last several
years, thanks in large part to the successful use of backpropagation in machine learning, including multi-layer
neural networks, also known as “deep learning” [Lecun et al., 2015; Goodfellow et al., 2016]. Backpropagation
is a specialization and independent invention of the reverse mode of automatic differentiation (AD) and is
used to tune a parametric model to closely match observed data, using gradient descent (orstochastic gradient
descent). Machine learning and other gradient-based optimization problems typically rely on derivatives of
functions with very high dimensional domains and a scalar codomain—exactly the conditions under which reverse-
mode AD is much more efficient than forward-mode AD (by a factor proportional to the domain dimension).
Unfortunately, while forward-mode AD (FAD) is easily understood and implemented, reverse-mode AD (RAD)
and backpropagation have had much more complicated explanations and implementations, involving mutation,
graph construction and traversal, and “tapes” (sequences of reified, interpretable assignments, also called “traces”
or “Wengert lists”). Mutation, while motivated by efficiency concerns, makes parallel execution difficult and
so undermines efficiency as well. Construction and interpretation (or compilation) of graphs and tapes also
add execution overhead. The importance of RAD makes its current complicated and bulky implementations
especially problematic. The increasingly large machine learning (and other optimization) problems being solved
with RAD (usually via backpropagation) suggest the need to find more streamlined, efficient implementations,
especially with the massive hardware parallelism now readily and inexpensively available in the form of graphics
processors (GPUs) and FPGAs.
Another difficulty in the practical application of AD in machine learning (ML) comes from the nature of many
currently popular ML frameworks, including Caffe [Jia et al., 2014], TensorFlow [Abadi et al., 2016], and Keras
[Chollet, 2016]. These frameworks are designed around the notion of a “graph” (or “network”) of interconnected
nodes, each of which represents a mathematical operation—a sort of data flow graph. Application programs
∗The appendices of this extended version include proofs omitted in the conference article [Elliott, 2018].
|
2302.08582.pdf | Pretraining Language Models with Human Preferences
Tomasz Korbak1 2 3, Kejian Shi2, Angelica Chen2, Rasika Bhalerao4, Christopher L. Buckley1, Jason Phang2
Samuel R. Bowman2 5, Ethan Perez2 3 5
Abstract
Language models (LMs) are pretrained to imitate
internet text, including content that would vio-
late human preferences if generated by an LM:
falsehoods, offensive comments, personally iden-
tifiable information, low-quality or buggy code,
and more. Here, we explore alternative objectives
for pretraining LMs in a way that also guides them
to generate text aligned with human preferences.
We benchmark five objectives for pretraining with
human feedback across three tasks and study how
they affect the trade-off between alignment and
capabilities of pretrained LMs. We find a Pareto-
optimal and simple approach among those we ex-
plored: conditional training, or learning distribu-
tion over tokens conditional on their human prefer-
ence scores given by a reward model. Conditional
training reduces the rate of undesirable content by
up to an order of magnitude, both when generat-
ing without a prompt and with an adversarially-
chosen prompt. Moreover, conditional training
maintains the downstream task performance of
standard LM pretraining, both before and after
task-specific finetuning. Pretraining with human
feedback results in much better preference sat-
isfaction than standard LM pretraining followed
by finetuning with feedback, i.e., learning and
then unlearning undesirable behavior. Our results
suggest that we should move beyond imitation
learning when pretraining LMs and incorporate
human preferences from the start of training.
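Conditional training, the Pareto-optimal objective identified in the abstract, amounts to prepending a control token that encodes the reward-model score of each training segment, so the LM learns a distribution over tokens conditioned on preference. The sketch below uses assumed token names (<|good|>, <|bad|>) and a simple threshold; the paper's tokenization and scoring scheme may differ.

```python
def make_conditional_examples(segments, reward_model, threshold=0.0,
                              good_tok="<|good|>", bad_tok="<|bad|>"):
    """Prepend a control token reflecting the reward-model score of each segment
    (sketch; token names and the thresholding rule are illustrative assumptions)."""
    examples = []
    for text in segments:
        score = reward_model(text)                 # scalar human-preference score
        tok = good_tok if score >= threshold else bad_tok
        examples.append(tok + text)                # the LM is pretrained on these strings
    return examples

# At generation time, prompting with good_tok asks the model for preferred continuations.
```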
1. Introduction
Language models (LMs) are trained to imitate text from
large and diverse datasets. These datasets often contain
1University of Sussex, 2New York University, 3FAR AI, 4Northeastern University, 5Anthropic. Correspondence to: Tomasz Korbak <tomasz.korbak@gmail.com>, Ethan Perez <ethan@anthropic.com>.
Proceedings of the 40thInternational Conference on Machine
Learning , Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright
2023 by the author(s).
[Figure 1 plot: toxicity score (log scale, 0.001 to 0.1) versus tokens seen (0 to 3.3B) for conventional LM pretraining, pretraining with feedback, and finetuning with feedback for 1.6B or 330M tokens.]
Figure 1: Toxicity score (lower is better) of LMs pretrained with the standard objective (solid blue), using conditional training (solid orange), and LMs finetuned using conditional training for 1.6B (orange dashed) and 330M tokens (orange dotted). Pretraining with Human Feedback (PHF) reduces the amount of offensive content much more effectively than finetuning with human feedback.
content that violates human preferences, e.g., falsehoods
(Lin et al., 2022), offensive comments (Gehman et al., 2020),
personally identifiable information (PII; Carlini et al., 2020)
or low-quality code (Chen et al., 2021b). Imitating such
data stands in stark contrast with the behavior people desire
from language models, e.g., to generate text that is helpful,
honest and harmless (Askell et al., 2021). In this paper, we
explore alternative objectives for pretraining LMs on large
amounts of diverse data that guide them to generate text
aligned with human preferences.
Prior work on aligning LMs with human preferences almost
exclusively focused on making adjustments to pretrained
LMs. A widely adopted strategy of adding safety filters on
top of pretrained LMs (Xu et al., 2020) works only to an ex-
tent: even the most effective safety filters fail to catch a large
amount of undesirable content (Gehman et al., 2020; Welbl
et al., 2021; Ziegler et al., 2022). Another approach involves
finetuning LMs using either supervised learning on curated
data (Solaiman & Dennison, 2021; Scheurer et al., 2023)
or reinforcement learning from human feedback (RLHF;
Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022;
Menick et al., 2022), but this strategy is also limited by the
fact that large LMs are quite resistant to forgetting their train-
ing data (an effect that increases with model size; Carlini
et al., 2022; Vu et al., 2022; Ramasesh et al., 2022). While
10.1101.2024.02.06.579080.pdf | Direct Coupling Analysis and the Attention Mechanism 1
Francesco Caredda1†and Andrea Pagnani1,2,3†
1DISAT, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129, Torino, Italy
2Italian Institute for Genomic Medicine, IRCCS Candiolo, SP-142, I-10060, Candiolo, Italy
3INFN, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
Abstract
Proteins serve as the foundation for nearly all biological functions within cells, encompassing
roles in transport, signaling, enzymatic activity, and more. Their functionalities hinge signifi-
cantly on their intricate three-dimensional structures, often posing challenges in terms of diffi-
culty, time, and expense for accurate determination. The introduction of AlphaFold 2 marked
a groundbreaking solution to the enduring challenge of predicting a protein's tertiary structure
from its amino acid sequence. However, the inherent complexity of AlphaFold's architecture
presents obstacles in deciphering its learning process and understanding the decision-making
that ultimately shapes the protein's final structure.
In this study, we introduce a shallow, unsupervised model designed to understand the self-
attention layer within the Evoformer block of AlphaFold. We establish a method based on Direct
Coupling Analysis (DCA), wherein the interaction tensor undergoes decomposition, leveraging
the same structure employed in Transformer architectures. The model's parameters, notably
fewer than those in standard DCA, are interpretable through an examination of the resulting
attention matrices. These matrices enable the extraction of contact information, subsequently
utilized for constructing the contact map of a protein family. Additionally, the self-attention
decomposition in the DCA Hamiltonian form adopted here facilitates the definition of a multi-
family learning architecture, enabling the inference of parameter sets shared across diverse
protein families. Finally, an autoregressive generative version of the model is implemented,
capable of efficiently generating new proteins in silico. This generative model reproduces the
summary statistics of the original protein family while concurrently inferring direct contacts in
the tertiary structure of the protein. The effectiveness of our Attention-Based DCA architecture
is evaluated using Multiple Sequence Alignments (MSAs) of varying lengths and depths, with
structural data sourced from the Pfam database.
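One way to realize the decomposition described in the abstract is to write the Potts coupling tensor J(i, j, a, b) as a sum over heads of a positional attention matrix multiplied by a per-head amino-acid interaction matrix. The sketch below illustrates that factorization under assumed shapes and names; it is not the authors' implementation.

```python
import numpy as np

def factored_couplings(Q, K, V):
    """Attention-style factorization of DCA couplings (sketch).

    Q, K: (H, L, d) positional queries/keys per head; V: (H, q, q) per-head amino-acid
    interaction matrices. Returns J of shape (L, L, q, q) with
    J[i, j] = sum_h softmax_j(Q_h K_h^T / sqrt(d))[i, j] * V_h.
    """
    H, L, d = Q.shape
    q = V.shape[-1]
    J = np.zeros((L, L, q, q))
    for h in range(H):
        logits = Q[h] @ K[h].T / np.sqrt(d)                 # (L, L) positional scores
        A = np.exp(logits - logits.max(axis=-1, keepdims=True))
        A = A / A.sum(axis=-1, keepdims=True)               # row softmax over positions
        J += A[:, :, None, None] * V[h][None, None, :, :]
    return J

# The Potts/DCA energy of an integer-encoded sequence s would then use J[i, j, s[i], s[j]].
```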
1 Introduction
Proteins constitute a diverse category of biological compounds constructed from a set of 20 amino
acids. Within an organism, they serve various functions, including structural support, mobility,
|
IN-Tetramer-manuscript-merged-with-figures-bioRxiv.pdf | Oligomeric HIV-1 Integrase Structures Reveal Functional Plasticity for Intasome Assembly and RNA Binding
Tao Jing1‡, Zelin Shan1‡, Tung Dinh4, Avik Biswas1, Sooin Jang5,6, Juliet Greenwood5, Min Li7, Zeyuan Zhang1, Gennavieve Gray1, Hye Jeong Shin1, Bo Zhou1, Dario Passos1, Sriram Aiyer1, Zhen Li5, Robert Craigie7, Alan N. Engelman5,6, Mamuka Kvaratskhelia4, Dmitry Lyumkis1,2,3,*
1. The Salk Institute for Biological Studies, La Jolla, CA, 92037, USA. 2. Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA, 92037, USA. 3. Graduate School of Biological Sciences, Section of Molecular Biology, University of California San Diego, La Jolla, CA 92093, USA. 4. Division of Infectious Diseases, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora, CO 80045, USA. 5. Department of Cancer Immunology and Virology, Dana-Farber Cancer Institute, Boston, MA 02215, USA. 6. Department of Medicine, Harvard Medical School, Boston, MA 02115, USA. 7. National Institutes of Health, National Institute of Diabetes and Digestive Diseases, Bethesda, MD, 20892, USA
‡ These authors contributed equally to this work
* Correspondence: Dmitry Lyumkis (dlyumkis@salk.edu) |
2402.07871.pdf | SCALING LAWS FOR FINE-GRAINED
MIXTURE OF EXPERTS
Jakub Krajewski∗ (University of Warsaw, IDEAS NCBR), Jan Ludziejewski∗ (University of Warsaw, IDEAS NCBR), Kamil Adamczewski (IDEAS NCBR), Maciej Pióro (IPPT PAN, IDEAS NCBR),
Michał Krutul (University of Warsaw, IDEAS NCBR), Szymon Antoniak (University of Warsaw, IDEAS NCBR), Kamil Ciebiera (University of Warsaw, IDEAS NCBR), Krystian Król (University of Warsaw, IDEAS NCBR),
Tomasz Odrzygóźdź (TradeLink), Piotr Sankowski (University of Warsaw, IDEAS NCBR), Marek Cygan (University of Warsaw, Nomagic), Sebastian Jaszczur∗ (University of Warsaw, IDEAS NCBR)
ABSTRACT
Mixture of Experts (MoE) models have emerged as a primary solution for reducing
the computational cost of Large Language Models. In this work, we analyze their
scaling properties, incorporating an expanded range of variables. Specifically, we
introduce a new hyperparameter, granularity, whose adjustment enables precise
control over the size of the experts. Building on this, we establish scaling laws for
fine-grained MoE, taking into account the number of training tokens, model size,
and granularity. Leveraging these laws, we derive the optimal training configuration
for a given computational budget. Our findings not only show that MoE models
consistently outperform dense Transformers but also highlight that the efficiency
gap between dense and MoE models widens as we scale up the model size and
training budget. Furthermore, we demonstrate that the common practice of setting
the size of experts in MoE to mirror the feed-forward layer is not optimal at almost
any computational budget.
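Establishing a scaling law of this kind in practice reduces to regressing observed losses on a parametric form in model size N, training tokens D, and granularity G. The additive power-law form below is an assumption used purely for illustration; the paper's fitted functional form and coefficients differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def loss_form(X, a, alpha, b, beta, c, gamma, e):
    """Illustrative parametric form L(N, D, G) = e + a/N^alpha + b/D^beta + c/G^gamma.
    (Assumed for the sketch; not the law fitted in the paper.)"""
    N, D, G = X
    return e + a / N**alpha + b / D**beta + c / G**gamma

def fit_scaling_law(runs):
    """runs: list of (params, tokens, granularity, observed_loss) tuples from experiments."""
    N, D, G, L = map(np.array, zip(*runs))
    p0 = [1.0, 0.3, 1.0, 0.3, 0.1, 0.5, 1.0]       # rough initial guesses
    popt, _ = curve_fit(loss_form, (N, D, G), L, p0=p0, maxfev=20000)
    return popt

# With a fitted law, the compute-optimal (N, D, G) for a budget is found by constrained
# minimization of loss_form subject to a FLOPs model of the MoE architecture.
```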
1 INTRODUCTION
In recent years, we have witnessed Large Language Models (LLMs) achieve exceptional performance
in tasks across numerous domains (Chowdhery et al., 2022; Yin et al., 2023; Agostinelli et al., 2023).
However, training those massive models incurs high computational costs, measured in millions
of GPU-hours (Touvron et al., 2023b), enabled only by enormous budgets (Scao et al., 2023) and
leading to non-negligible carbon footprints (Faiz et al., 2024). To combat these obstacles, the research
community has been striving to increase the efficiency of LLMs. One promising approach that has
lately been gaining visibility is the use of Mixture of Experts (MoE) methods. Models such as Switch
(Fedus et al., 2022) and Mixtral (Jiang et al., 2024) have already demonstrated that it is possible to
achieve comparable effectiveness with significantly lower computational costs.
In the context of the current trend of increasing budgets for training language models, a question
arises: will MoE models continue to be attractive in the future? This is an important issue, as other
studies have stated that the gap in efficiency between MoE and standard Transformers narrows at
Contributions: Jakub implemented fine-grained MoE, ran experiments, and oversaw the course of the project.
Jan designed and implemented the scaling laws, also optimized and tuned the fine-grained MoE implementation.
Kamil A. provided significant advice on many aspects of the project. Maciej experimented with the block design
and, with Michał, provided considerable technical support. Szymon, Kamil C., Krystian, and Tomasz contributed
to the project and the engineering in various ways. Marek, along with Piotr, provided high-level scientific advice.
Sebastian came up with the initial idea, started the project, and supervised it while setting the research direction
and leading experiments and analyses. Correspondence to <s.jaszczur@uw.edu.pl>. ∗Equal contribution.
|
2105.14111.pdf | Goal Misgeneralization in Deep Reinforcement Learning
Lauro Langosco*1, Jack Koch*, Lee Sharkey*2, Jacob Pfau3, Laurent Orseau4, David Krueger1
Abstract
We study goal misgeneralization , a type of out-
of-distribution generalization failure in reinforce-
ment learning (RL). Goal misgeneralization oc-
curs when an RL agent retains its capabilities out-
of-distribution yet pursues the wrong goal. For
instance, an agent might continue to competently
avoid obstacles, but navigate to the wrong place.
In contrast, previous works have typically focused
on capability generalization failures, where an
agent fails to do anything sensible at test time.
We formalize this distinction between capability
and goal generalization, provide the first empiri-
cal demonstrations of goal misgeneralization, and
present a partial characterization of its causes.
1. Introduction
Out-of-distribution (OOD) generalization, performing well
on test data that is not distributed identically to the training
set, is a fundamental problem in machine learning (Arjovsky,
2021). OOD generalization is crucial since in many applica-
tions it is not feasible to collect data distributed identically
to that which the model will encounter in deployment.
In this work, we focus on a particularly concerning type of
generalization failure that can occur in RL. When an RL
agent is deployed out of distribution, it may simply fail to
take useful actions. However, there exists an alternative
failure mode in which the agent pursues a goal other than
the training reward while retaining the capabilities it had on
the training distribution. For example, an agent trained to
pursue a fixed coin might not recognize the coin when it is
positioned elsewhere, and instead competently navigate to
the wrong position (Figure 1). We call this kind of failure
goal misgeneralization1and distinguish it from capabil-
*Equal contribution. 1University of Cambridge, 2University of Tübingen, 3University of Edinburgh, 4DeepMind, London. Correspondence to: Lauro Langosco <langosco.lauro@gmail.com>.
Proceedings of the 39thInternational Conference on Machine
Learning , Baltimore, Maryland, USA, PMLR 162, 2022. Copy-
right 2022 by the author(s).
1We adopt this term from Shah et al. (2022). A previous version
of our work used the term ‘objective robustness failure’ instead. We
(a) Goal position fixed. (b) Goal position randomized.
Figure 1. (a) At training time, the agent learns to reliably reach the
coin which is always located at the end of the level. (b)However,
when the coin position is randomized at test time, the agent still
goes towards the end of the level and often skips the coin. The
agent’s capability for solving the levels generalizes, but its goal of
collecting coins does not.
ity generalization failures. We provide the first empirical
demonstrations of goal misgeneralization to highlight and
illustrate this phenomenon.
While it is well-known that the true reward function can
be unidentifiable in inverse reinforcement learning (Amin
& Singh, 2016), our work shows that a similar problem
can also occur in reinforcement learning when features of
the environment are correlated and predictive of the reward
on the training distribution but not OOD. In this way, goal
misgeneralization can also resemble problems that arise in
supervised learning when models use unreliable features:
both problems are a form of competent misgeneralization
that works in-distribution but fails OOD. Disentangling ca-
pability and goal generalization failures is difficult in su-
pervised learning; for instance, are adversarial examples
bugs or features (Ilyas et al., 2019)? In contrast, studying
RL allows us to formally distinguish capabilities and goals,
which roughly correspond to understanding the environment
dynamics and the reward function, respectively.
Goal misgeneralization might be more dangerous than ca-
pability generalization failures, since an agent that capably
pursues an incorrect goal can leverage its capabilities to visit
arbitrarily bad states (Zhuang & Hadfield-Menell, 2021). In
contrast, the only risks from capability generalization fail-
ures are those of accidents due to incompetence.
use the term ‘goal’ to refer to goal-directed (optimizing) behavior,
not just goal-states in MDPs.
2402.09727.pdf | 2024-02-14
A Human-Inspired Reading Agent with Gist
Memory of Very Long Contexts
Kuang-Huei Lee1, Xinyun Chen1, Hiroki Furuta1, John Canny1and Ian Fischer2
1Google DeepMind,2Google Research
Correspond to: {leekh, iansf}@google.com; Author contributions are stated in Appendix J.
Website: read-agent.github.io
Current Large Language Models (LLMs) are not only limited to some maximum context length, but also
are not able to robustly consume long inputs. To address these limitations, we propose ReadAgent,
an LLM agent system that increases effective context length up to 20×in our experiments. Inspired
by how humans interactively read long documents, we implement ReadAgent as a simple prompting
system that uses the advanced language capabilities of LLMs to (1) decide what content to store together
in a memory episode, (2) compress those memory episodes into short episodic memories called gist
memories , and (3) take actions to look up passages in the original text if ReadAgent needs to remind
itself of relevant details to complete a task. We evaluate ReadAgent against baselines using retrieval
methods, using the original long contexts, and using the gist memories. These evaluations are performed
on three long-document reading comprehension tasks: QuALITY, NarrativeQA, and QMSum. ReadAgent
outperforms the baselines on all three tasks while extending the effective context window by 3−20×.
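The three-step workflow described above (episode pagination, gisting, and interactive lookup) can be expressed as a short prompting loop. In the sketch below, llm(prompt) is a placeholder for a model call, pagination is a naive fixed-size split rather than the LLM-chosen episode breaks used in the paper, and the prompts are simplified assumptions.

```python
def read_agent_answer(llm, document, question, page_len=2000, lookups=2):
    """Paginate -> gist -> selective lookup (sketch; `llm(prompt) -> str` is a placeholder)."""
    # 1. Episode pagination: naive fixed-size split of the long text into pages
    pages = [document[i:i + page_len] for i in range(0, len(document), page_len)]
    # 2. Gisting: compress each page into a short gist memory
    gists = [llm(f"Shorten the following passage into a brief gist:\n{p}") for p in pages]
    gist_memory = "\n".join(f"[page {i}] {g}" for i, g in enumerate(gists))
    # 3. Interactive lookup: ask which pages to re-read, then answer with expanded context
    idx = llm(f"Gists:\n{gist_memory}\nQuestion: {question}\n"
              f"List up to {lookups} page numbers to re-read, comma-separated.")
    chosen = [int(t) for t in idx.replace(",", " ").split() if t.isdigit()][:lookups]
    expanded = "\n".join(pages[i] if i in chosen else f"[page {i}] {gists[i]}"
                         for i in range(len(pages)))
    return llm(f"Context:\n{expanded}\nQuestion: {question}\nAnswer:")
```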
1. Introduction
Transformer-based Large Language Models (LLMs)
are highly capable of language understanding, but the
amount of text that LLMs are able to read at one time
is constrained. Not only is there an explicit context
length limitation, but it has also been found that per-
formance of LLMs tends to decline with increasingly
long inputs even when they don’t actually exceed the
explicit context window [ 25,37]. In contrast, humans
can read, understand, and reason over very long texts,
such as a series of interrelated books.
We posit that an underlying reason for this gap is inher-
ent in the differences in reading approaches. Typically,
we use LLMs to consume the exact given content word-
by-word and the process is relatively passive. On the
other hand, humans read and reason over long text
differently. First, the exact information tends to be
forgotten quickly, whereas the fuzzier gist information,
i.e. the substance irrespective of exact words, from
past readings lasts much longer [ 34,31,33]1. Second,
human reading is an interactive process. When we
need to remind ourselves of relevant details in order
to complete a task, such as answering a question, we
look them up in the original text.
1Fuzzy-trace theory [ 34] posits that people form two types of
memory representations about a past event – verbatim and gist
memories. Gist memories, often episodic, are fuzzy memories of
past events, whereas verbatim memories contain details of past
events. People prefer to reason with gists rather than with verbatim
memories [32].
[Figure 1 diagram: a very long text is split into pages (1. Episode Pagination: page 1 through page N); each page is compressed into a gist, forming the episodic gist memory (2. Gisting: [page 1] gist through [page N] gist); a question such as "Q: Why did John ...?" triggers lookup of relevant original pages (3. Lookup).]
Figure 1 | ReadAgent workflow.
We think that using the fuzzy gist memory to capture
global context and attending to local details together
enables humans to reason over very long context effi-
ciently, in terms of how much information to process
at once, and is also important for comprehension. For
example, if we were to infer the intention of a fictional
character’s specific action described on a page in a
novel, besides focusing on the surrounding pages, we
likely also need to understand the overall story and
©2024 Google DeepMind. All rights reserved.
2402.09900.pdf | Revisiting Recurrent Reinforcement Learning with Memory Monoids
Steven Morad1Chris Lu2Ryan Kortvelesy1Stephan Liwicki3Jakob Foerster2Amanda Prorok1
Abstract
Memory models such as Recurrent Neural Net-
works (RNNs) and Transformers address Par-
tially Observable Markov Decision Processes
(POMDPs) by mapping trajectories to latent
Markov states. Neither model scales particularly
well to long sequences, especially compared to
an emerging class of memory models sometimes
called linear recurrent models. We discover that
we can model the recurrent update of these mod-
els using a monoid , leading us to reformulate
existing models using a novel memory monoid
framework. We revisit the traditional approach
to batching in recurrent RL, highlighting both
theoretical and empirical deficiencies. We lever-
age the properties of memory monoids to pro-
pose a batching method that improves sample ef-
ficiency, increases the return, and simplifies the
implementation of recurrent loss functions in RL.
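A concrete example of a memory monoid from the paper's setting is the discounted sum: each element pairs a discount factor with an accumulated reward, the binary operator is associative, and the identity element is (1, 0). Associativity is what permits a parallel scan in place of a sequential RNN-style loop; the sketch below shows the operator with a simple sequential scan for clarity (the return-to-go is obtained by scanning the reversed sequence), and the naming is illustrative rather than the paper's API.

```python
# A memory monoid is a set H with an associative binary operator and an identity element.
# Example: discounted sums, with elements (discount, accumulated_reward).

IDENTITY = (1.0, 0.0)

def combine(a, b):
    """(g1, r1) • (g2, r2) = (g1*g2, r1 + g1*r2); associative, so in principle it can be
    evaluated with a parallel (associative) scan rather than a step-by-step recurrence."""
    g1, r1 = a
    g2, r2 = b
    return (g1 * g2, r1 + g1 * r2)

def scan(elems):
    """Inclusive left scan; out[t] carries the discounted sum of rewards 0..t."""
    out, acc = [], IDENTITY
    for e in elems:
        acc = combine(acc, e)
        out.append(acc)
    return out

rewards = [1.0, 0.0, 2.0, 3.0]
gamma = 0.99
states = scan([(gamma, r) for r in rewards])
print([r for _, r in states])    # running discounted sums of the reward sequence
```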
1. Introduction
Reinforcement learning (RL) focuses on solving Markov
Decision Processes (MDPs), although in many interesting
problems we cannot access the Markov state directly. Out-
side of simulators, we instead receive noisy or ambiguous
observations , resulting in Partially Observable MDPs. The
standard approach to RL under partial observability is to
summarize a sequence of observations into a latent Markov
state using a memory model (sometimes called a sequence
model). Often, these memory models are either RNNs or
Transformers.
Unfortunately, it is expensive to train Transformers or
RNNs over long sequences. Instead, prior work often
truncates then zero-pads observation sequences into fixed
length segments , keeping the maximum sequence length
short. Using segments adds implementation complexity,
1Department of Computer Science and Technology, University
of Cambridge2Department of Engineering Science, University of
Oxford3Toshiba Europe Ltd.. Correspondence to: Steven Morad
<sm2558@cam.ac.uk>.
Code available at https://github.com/proroklab/memory-monoids. Copyright 2024 by the author(s).
reduces efficiency, and introduces theoretical issues. That
said, most prior work and virtually all existing RL libraries
follow this segment-based approach.
A new class of sequence models, sometimes called Linear
Recurrent Models or Linear Transformers, is much more
efficient over long sequences. These models can be paral-
lelized over the sequence dimension while retaining sub-
quadratic space complexity. We find that many such mod-
els can be rewritten as a memory monoid , a unifying frame-
work for efficient memory models that we define in this
paper. Since these efficient models do not share sequence
length limitations with past models, we question whether
the use of segments is still necessary .
Contributions In this work, we propose a unifying
framework for efficient memory modeling, then propose an
alternative batching method reliant on our framework. Our
method improves sample efficiency across various tasks
and memory models, while generally simplifying imple-
mentation.
1. We propose the memory monoid , a unifying frame-
work for efficient sequence models. In particular, we
• Reformulate existing sequence models as mem-
ory monoids
• Derive memory monoids for the discounted re-
turn and advantage, leveraging GPU parallelism
• Discover a method for inline resets, enabling any
memory monoid to span multiple episodes
2. We investigate the impact that segments have on rein-
forcement learning. Specifically, we
• Highlight theoretical shortcomings of sequence
truncation and padding, then demonstrate their
empirical impact
• Propose a batching method that improves sam-
ple efficiency across all tested models and tasks,
while also simplifying recurrent loss functions
2. Preliminaries and Background
Consider an MDP (S, A, R, T, γ), where at each timestep
t, an agent produces a transition T = (s, a, r, s′) from in-
teraction with the environment. We let s, s′∈Sdenote
|
2305.11841.pdf | How Does Generative Retrieval Scale to Millions of Passages?
Ronak Pradeep∗†§, Kai Hui∗, Jai Gupta, Adam D. Lelkes, Honglei Zhuang
Jimmy Lin§, Donald Metzler, Vinh Q. Tran∗
Google Research,§University of Waterloo
rpradeep@uwaterloo.ca ,{kaihuibj,vqtran}@google.com
Abstract
Popularized by the Differentiable Search In-
dex, the emerging paradigm of generative re-
trieval re-frames the classic information re-
trieval problem into a sequence-to-sequence
modeling task, forgoing external indices and
encoding an entire document corpus within
a single Transformer. Although many differ-
ent approaches have been proposed to improve
the effectiveness of generative retrieval, they
have only been evaluated on document cor-
pora on the order of 100k in size. We conduct
the first empirical study of generative retrieval
techniques across various corpus scales, ulti-
mately scaling up to the entire MS MARCO
passage ranking task with a corpus of 8.8M
passages and evaluating model sizes up to 11B
parameters. We uncover several findings about
scaling generative retrieval to millions of pas-
sages; notably, the central importance of using
synthetic queries as document representations
during indexing, the ineffectiveness of existing
proposed architecture modifications when ac-
counting for compute cost, and the limits of
naively scaling model parameters with respect
to retrieval performance. While we find that
generative retrieval is competitive with state-
of-the-art dual encoders on small corpora, scal-
ing to millions of passages remains an impor-
tant and unsolved challenge. We believe these
findings will be valuable for the community to
clarify the current state of generative retrieval,
highlight the unique challenges, and inspire
new research directions.
1 Introduction
For the last several years, dual encoders (Gillick
et al., 2018; Karpukhin et al ., 2020; Ni et al ., 2022b;
Chen et al ., 2022) have dominated the landscape
for first-stage information retrieval. They model
relevance by mapping queries and documents into
the same embedding space, optimized via con-
trastive learning (Hadsell et al ., 2006; Gao et al .,
∗Equal Contribution.
†Work completed while a Student Researcher at Google.
2021). Dense embeddings are pre-computed for
all documents in a corpus and stored in an external
index. This allows for fast approximate nearest
neighbor search (Vanderkam et al ., 2013; John-
son et al ., 2021) to retrieve relevant documents.
Cross-encoders based on large Transformer mod-
els (Nogueira et al ., 2019b, 2020; Pradeep et al .,
2021b) often function on top of these retrieved doc-
uments to further refine the top results.
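As a concrete (and deliberately toy) picture of the dual-encoder pipeline described above, the sketch below uses a hashing bag-of-words encoder as a stand-in for a learned model; the point is only the structure: documents are embedded once into a shared space, and retrieval is an inner-product nearest-neighbor lookup over that index.

```python
import numpy as np

# Toy dual-encoder retrieval (illustrative; the encoder is a stand-in, not a
# trained Transformer). Documents are embedded offline; queries at search time.
DIM, VOCAB = 64, 10_000
rng = np.random.default_rng(0)
proj = rng.standard_normal((VOCAB, DIM))   # shared projection for both "towers"

def encode(text):
    vec = np.zeros(DIM)
    for w in text.lower().split():
        vec += proj[hash(w) % VOCAB]
    return vec / (np.linalg.norm(vec) + 1e-9)

corpus = ["passage about generative retrieval", "passage about pasta recipes"]
doc_index = np.stack([encode(d) for d in corpus])    # pre-computed dense index
scores = doc_index @ encode("generative retrieval")  # inner-product relevance
print(corpus[int(np.argmax(scores))])                # passage sharing the query terms
```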
Recently, the emerging paradigm of genera-
tive retrieval (De Cao et al ., 2020; Tay et al .,
2022) sought to replace this entire process
with a single sequence-to-sequence Transformer
model (Sutskever et al ., 2014; Vaswani et al ., 2017),
showing promising results against dual encoders
given a sufficiently small corpus size. Since then,
various techniques, such as (Zhuang et al ., 2022b;
Bevilacqua et al ., 2022; Zhou et al ., 2022; Wang
et al ., 2022; Chen et al ., 2023), have aimed to
improve the effectiveness of generative retrieval
models, either with alternative document identifier
formulations, architecture changes, or training ob-
jectives. Such work, however, has only evaluated
generative retrieval over relatively small corpora on
the order of 100k documents, such as Natural Ques-
tions (Kwiatkowski et al ., 2019), TriviaQA (Joshi
et al., 2017), or small subsets of the MS MARCO
document ranking task (Nguyen et al ., 2016). De-
spite these research contributions, a number of
open questions remain unanswered, including how
well current generative retrieval techniques work
on larger corpora and which aspects of generative
retrieval models proposed so far are vital at scale.
In this paper, we conduct the first empirical
study of generative retrieval techniques over the
entire MS MARCO passage-level corpus, evaluat-
ing its effectiveness over 8.8M passages. We select
popular approaches in recent works and evaluate
them first on Natural Questions and TriviaQA to
establish a definitive ablation of techniques in a
controlled setup. Our experiments mainly focus
|
10.1016.j.bpj.2017.10.028.pdf | Article
Coevolutionary Landscape of Kinase Family
Proteins: Sequence Probabilities and Functional Motifs
Allan Haldane,1William F. Flynn,1,2Peng He,1and Ronald M. Levy1,*
1Center for Biophysics and Computational Biology, Department of Chemistry, and Institute for Computational Molecular Science, Temple
University, Philadelphia, Pennsylvania and 2Department of Physics and Astronomy, Rutgers, The State University of New Jersey, Piscataway,
New Jersey
ABSTRACT The protein kinase catalytic domain is one of the most abundant domains across all branches of life. Although
kinases share a common core function of phosphoryl-transfer, they also have wide functional diversity and play varied roles in cell signaling networks, and for this reason are implicated in a number of human diseases. This functional diversity is primarily
achieved through sequence variation, and uncovering the sequence-function relationships for the kinase family is a major chal-
lenge. In this study we use a statistical inference technique inspired by statistical physics, which builds a coevolutionary ‘‘Potts’’ Hamiltonian model of sequence variation in a protein family. We show how this model has sufficient power to predict the prob-
ability of specific subsequences in the highly diverged kinase family, which we verify by comparing the model’s predictions with
experimental observations in the Uniprot database. We show that the pairwise (residue-residue) interaction terms of the statistical model are necessary and sufficient to capture higher-than-pairwise mutation patterns of natural kinase sequences. We observe that previously identified functional sets of residues have much stronger correlated interaction scores than are typical.
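For readers less familiar with this model class, the following is a generic sketch of a Potts sequence model with random toy parameters (illustrative only, not the model inferred in this study): per-site fields and pairwise couplings define an energy, and the probability of a sequence is proportional to exp(-E).

```python
import numpy as np

# Generic Potts model sketch: sequence s over q states, energy from fields h
# and pairwise couplings J; P(s) is proportional to exp(-E(s)). The pairwise
# terms are what allow correlated (covarying) positions to be captured.
L, q = 6, 21                                     # toy length; 20 amino acids + gap
rng = np.random.default_rng(1)
h = rng.normal(scale=0.1, size=(L, q))           # site fields
J = rng.normal(scale=0.05, size=(L, L, q, q))
J = (J + J.transpose(1, 0, 3, 2)) / 2            # enforce J_ij(a,b) = J_ji(b,a)

def energy(s):
    e = -sum(h[i, s[i]] for i in range(L))
    e -= sum(J[i, j, s[i], s[j]] for i in range(L) for j in range(i + 1, L))
    return e

s = rng.integers(0, q, size=L)
print("E(s) =", energy(s))                       # log P(s) = -E(s) - log Z
```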
INTRODUCTION
About 2% of the human genome belongs to the protein
kinase family and over 10^5 different kinases have been
sequenced from many species ( 1). Protein kinases’ common
catalytic role in protein phosphorylation is carried out by a conserved catalytic structural motif, but individual kinases are specialized to phosphorylate particular substrates and are bound by different regulatory partners as part of cell
signaling networks. Kinases are implicated in many human
diseases, and understanding how a particular kinase’s sequence determines its individual function has clinical applications. The ability to predict the sequence-dependent effect of specific mutations is relevant for the treatment of kinase-related cancers (2), and understanding the differ-
ences in functionality between kinases can aid in selective drug design (3).
One approach to understanding the effects of particular
kinase sequence variations has been by structural analysis, based on thousands of observed kinase crystal structures and comparison of their sequences. Patterns of structural variation and conservation within and between protein
kinase subfamilies has led to the identification of various functional motifs such as the HRD and DFG motifs necessary for catalysis, networks of stabilizing interactions formed in the kinase active catalytic state known as the C-spine and R-spine, and the importance of the C and F helices in acting as rigid foundations on which the catalytic
core rests ( 4–10 ). Two conformational states, the catalyti-
cally active ‘‘DFG-in’’ and the inactive ‘‘DFG-out’’ states have been discovered to be important in controlling kinase activation and regulation (11). An important goal of these
studies is to understand the sequence-dependent ligand-binding properties of different kinases for therapeutic purposes; however, ligand binding affinities are still difficult to predict (12–15), and crystal structures only give a partial
view of kinase function.
Another way to extract information about function from
kinase sequence variation is to construct a statistical (Potts) model from a multiple sequence alignment (MSA) of sequences collected from many organisms. The idea of using sequence statistics to understand protein structure and function has been motivated and justified by the observation that strongly covarying positions in an MSA correspond
well to contacts in structure, a fact used for protein contact
|
2024.04.22.590591v1.full.pdf | Design of highly functional genome editors by
modeling the universe of CRISPR-Cas sequences
Jeffrey A. Ruffolo1,*, Stephen Nayfach1,*, Joseph Gallagher1,*, Aadyot Bhatnagar1,*, Joel Beazer1, Riffat Hussain1, Jordan
Russ1, Jennifer Yip1, Emily Hill1, Martin Pacesa1,2, Alexander J. Meeske1,3, Peter Cameron1, and Ali Madani1,†
1Profluent Bio, Berkeley, CA, USA;2Laboratory of Protein Design and Immunoengineering, École Polytechnique Fédérale de Lausanne and Swiss Institute of Bioinformatics,
Lausanne, Switzerland;3Department of Microbiology, University of Washington, Seattle, WA, USA
Gene editing has the potential to solve fundamental challenges in agriculture, biotechnology, and human health. CRISPR-based gene editors
derived from microbes, while powerful, often show significant functional tradeoffs when ported into non-native environments, such as human
cells. Artificial intelligence (AI) enabled design provides a powerful alternative with potential to bypass evolutionary constraints and generate
editors with optimal properties. Here, using large language models (LLMs) trained on biological diversity at scale, we demonstrate the first
successful precision editing of the human genome with a programmable gene editor designed with AI. To achieve this goal, we curated a
dataset of over one million CRISPR operons through systematic mining of 26 terabases of assembled genomes and meta-genomes. We
demonstrate the capacity of our models by generating 4.8x the number of protein clusters across CRISPR-Cas families found in nature and
tailoring single-guide RNA sequences for Cas9-like effector proteins. Several of the generated gene editors show comparable or improved
activity and specificity relative to SpCas9, the prototypical gene editing effector, while being 400 mutations away in sequence. Finally, we
demonstrate an AI-generated gene editor, denoted as OpenCRISPR-1, exhibits compatibility with base editing. We release OpenCRISPR-1
publicly to facilitate broad, ethical usage across research and commercial applications.
genome editing |large language models |protein design
Introduction
Genome editing technologies, including those derived from prokaryotic CRISPR-Cas systems, have rev-
olutionized life science research and are poised to transform medicine and agriculture. Single-protein
CRISPR-Cas effectors, including the widely-adopted Cas9 nuclease from Streptococcus pyogenes (SpCas9),
have been utilized in biotechnology owing to their simplicity, robustness, and compact form. In order to
diversify the CRISPR toolbox and expand editing capabilities, new systems have been mined across diverse
microbial and viral genomes. While these novel systems have been sought for specific properties, such
as small size or extended protein stability in biofluids ( 1,2), they typically exhibit trade-offs in critical
attributes such as basal activity in target cells, PAM selectivity, thermal optima, or in vitro biochemical
properties, ultimately limiting their reach (2–5).
Repurposed CRISPR systems have been optimized for biotechnology using a range of protein engineering
approaches, including directed evolution and structure-guided mutagenesis. Directed evolution of Cas
proteins has proven extremely powerful yet can be limited by the rugged and non-convex nature of the
fitness landscapes ( 6–9), along with the difficulty of implementing selection-based screening in human cells.
Structure-guided rational mutagenesis offers an alternative or synergistic approach that has proven successful
for improving Cas9 basal activity and specificity in human cells ( 10–13). Similar results may be achievable
with structure-conditioned protein sequence design models ( 14,15), which learn the mapping from structure
to sequence from data. However, both of these approaches are dependent on explicit structural hypotheses,
either in the form of mechanistic understanding for rational mutagenesis or solved structures representing
key functional states for computational design, which are difficult to obtain for functions more complex
than simple binding interactions.
Protein language models eschew explicit structural hypotheses and instead learn the co-evolutionary
*These authors contributed equally to this work
†To whom correspondence should be addressed. E-mail: ali@profluent.bio
|
2208.11970.pdf | Understanding Diffusion Models: A Unified Perspective
Calvin Luo
Google Research, Brain Team
calvinluo@google.com
August 26, 2022
Contents
Introduction: Generative Models
Background: ELBO, VAE, and Hierarchical VAE
    Evidence Lower Bound
    Variational Autoencoders
    Hierarchical Variational Autoencoders
Variational Diffusion Models
    Learning Diffusion Noise Parameters
    Three Equivalent Interpretations
Score-based Generative Models
Guidance
    Classifier Guidance
    Classifier-Free Guidance
Closing
Introduction: Generative Models
Given observed samples x from a distribution of interest, the goal of a generative model is to learn to
model its true data distribution p(x). Once learned, we can generate new samples from our approximate
model at will. Furthermore, under some formulations, we are able to use the learned model to evaluate the
likelihood of observed or sampled data as well.
There are several well-known directions in the current literature, which we will only introduce briefly at a high level.
Generative Adversarial Networks (GANs) model the sampling procedure of a complex distribution, which
is learned in an adversarial manner. Another class of generative models, termed "likelihood-based", seeks
to learn a model that assigns a high likelihood to the observed data samples. This includes autoregressive
models, normalizing flows, and Variational Autoencoders (VAEs). Another similar approach is energy-based
modeling, in which a distribution is learned as an arbitrarily flexible energy function that is then normalized.
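As a minimal, self-contained illustration of the likelihood-based idea (a toy example, not from this tutorial): fit a one-dimensional Gaussian p(x) to observed samples by maximum likelihood, then both draw new samples and evaluate log p(x) of other data.

```python
import numpy as np

# Toy likelihood-based generative model: a Gaussian fit by maximum likelihood.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)     # observed samples x ~ q(x)

mu, sigma = data.mean(), data.std()                  # MLE parameters of p(x)
samples = rng.normal(mu, sigma, size=5)              # generation from the model

def log_p(x):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

print(samples)
print(log_p(np.array([2.0, 10.0])))                  # likely vs. unlikely inputs
```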
|
2303.06296.pdf | STABILIZING TRANSFORMER TRAINING BY PREVENTING
ATTENTION ENTROPY COLLAPSE
A P REPRINT
Shuangfei Zhai∗, Tatiana Likhomanenko∗, Etai Littwin∗, Dan Busbridge∗, Jason Ramapuram∗,
Yizhe Zhang, Jiatao Gu, Josh Susskind
Apple
{szhai,antares,elittwin,dbusbridge,jramapuram,yizzhang,jgu32,jsusskind}@apple.com
March 14, 2023
ABSTRACT
Training stability is of great importance to Transformers. In this work, we investigate the training
dynamics of Transformers by examining the evolution of the attention layers. In particular, we track
the attention entropy for each attention head during the course of training, which is a proxy for
model sharpness. We identify a common pattern across different architectures and tasks, where low
attention entropy is accompanied by high training instability, which can take the form of oscillating
loss or divergence. We denote the pathologically low attention entropy, corresponding to highly
concentrated attention scores, as entropy collapse . As a remedy, we propose σReparam, a simple
and efficient solution where we reparametrize all linear layers with spectral normalization and an
additional learned scalar. We demonstrate that the proposed reparameterization successfully prevents
entropy collapse in the attention layers, promoting more stable training. Additionally, we prove a tight
lower bound of the attention entropy, which decreases exponentially fast with the spectral norm of
the attention logits, providing additional motivation for our approach. We conduct experiments with
σReparam on image classification, image self-supervised learning, machine translation, automatic
speech recognition, and language modeling tasks, across Transformer architectures. We show that
σReparam provides stability and robustness with respect to the choice of hyperparameters, going so
far as enabling training (a) a Vision Transformer to competitive performance without warmup, weight
decay, layer normalization or adaptive optimizers; (b) deep architectures in machine translation and
(c) speech recognition to competitive performance without warmup and adaptive optimizers.
Keywords Transformer, SSL, vision, NLP, MT, ASR, stability, attention
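A compact sketch of the two quantities just described, written from the abstract's description rather than the released code: per-head attention entropy as a sharpness proxy, and a spectral reparameterization that rescales a weight matrix by a learned scalar over its spectral norm.

```python
import numpy as np

def attention_entropy(logits):
    # logits: (tokens, tokens) attention scores for one head
    a = np.exp(logits - logits.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return -(a * np.log(a + 1e-12)).sum(axis=-1).mean()   # low value = collapse

def sigma_reparam(W, gamma=1.0):
    sigma = np.linalg.norm(W, ord=2)    # spectral norm; in practice tracked by power iteration
    return (gamma / sigma) * W

rng = np.random.default_rng(0)
print("entropy:", attention_entropy(rng.normal(size=(8, 8))))
print("spectral norm after reparam:",
      np.linalg.norm(sigma_reparam(rng.normal(size=(16, 16))), ord=2))   # close to gamma
```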
1 Introduction
2 Related Works
3 Method
3.1 Attention Entropy
3.2 σReparam
4 Experiments
4.1 Supervised Image Classification
∗Equal contribution |
2005.12320.pdf | SCAN: Learning to Classify Images without
Labels
Wouter Van Gansbeke1⋆Simon Vandenhende1⋆Stamatios Georgoulis2
Marc Proesmans1Luc Van Gool1,2
1KU Leuven/ESAT-PSI2ETH Zurich/CVL, TRACE
Abstract. Can we automatically group images into semantically mean-
ingful clusters when ground-truth annotations are absent? The task of
unsupervised image classification remains an important, and open chal-
lenge in computer vision. Several recent approaches have tried to tackle
this problem in an end-to-end fashion. In this paper, we deviate from
recent works, and advocate a two-step approach where feature learning
and clustering are decoupled. First, a self-supervised task from represen-
tation learning is employed to obtain semantically meaningful features.
Second, we use the obtained features as a prior in a learnable cluster-
ing approach. In doing so, we remove the ability for cluster learning
to depend on low-level features, which is present in current end-to-end
learning approaches. Experimental evaluation shows that we outperform
state-of-the-art methods by large margins, in particular +26 .6% on CI-
FAR10, +25 .0% on CIFAR100-20 and +21 .3% on STL10 in terms of clas-
sification accuracy. Furthermore, our method is the first to perform well
on a large-scale dataset for image classification. In particular, we obtain
promising results on ImageNet, and outperform several semi-supervised
learning methods in the low-data regime without the use of any ground-
truth annotations. The code is made publicly available here .
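A deliberately simplified stand-in for the two-step recipe (this is plain k-means on pretrained features, not the SCAN clustering objective itself): step one yields an embedding, step two groups images in that embedding space.

```python
import numpy as np

# Step 1 is mocked with random "self-supervised" features containing two latent
# groups; step 2 clusters in that feature space.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 128))
features[:150] += 3.0                     # two semantic groups in feature space

def kmeans(x, k, iters=20):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([x[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

print(np.bincount(kmeans(features, k=2)))  # roughly 150 / 150
```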
Keywords: Unsupervised Learning, Self-Supervised Learning, Image
Classification, Clustering.
1 Introduction and prior work
Image classification is the task of assigning a semantic label from a predefined set
of classes to an image. For example, an image depicts a cat, a dog, a car, an air-
plane, etc., or abstracting further an animal, a machine, etc. Nowadays, this task
is typically tackled by training convolutional neural networks [28,44,19,53,47] on
large-scale datasets [11,30] that contain annotated images, i.e. images with their
corresponding semantic label. Under this supervised setup, the networks excel
at learning discriminative feature representations that can subsequently be clus-
tered into the predetermined classes. What happens, however, when there is no
access to ground-truth semantic labels at training time? Or going further, the
semantic classes, or even their total number, are not a priori known? The desired
⋆Authors contributed equally |
2212.05339.pdf | Elixir: Train a Large Language Model on a Small
GPU Cluster
Haichen Huang
HPC-AI Technology Inc.
hhc@hpcaitech.com Jiarui Fang∗
HPC-AI Technology Inc.
fangjr@hpcaitech.com Hongxin Liu
HPC-AI Technology Inc.
liuhongxin@hpcaitech.com
Shenggui Li
HPC-AI Technology Inc.
lisg@hpcaitech.com Yang You†
National University of Singapore
youy@comp.nus.edu.sg
Abstract
In recent years, large language models have achieved great success due to their
unprecedented size. However, training these models poses a challenge for most
researchers as it requires a substantial number of GPUs. To reduce GPU memory
usage, memory partitioning and memory offloading have been proposed. These
approaches eliminate memory redundancies and offload memory usage to the CPU
and NVMe memory, respectively, enabling training on small GPU clusters. How-
ever, directly deploying these solutions often leads to suboptimal efficiency. Only
experienced experts can unleash the full potential of hardware by carefully tuning
the distributed configuration. Thus, we present a novel solution, Elixir, which auto-
mates efficient large model training based on pre-runtime model profiling. Elixir
aims to identify the optimal combination of partitioning and offloading techniques
to maximize training throughput. In our experiments, Elixir significantly outper-
forms the current state-of-the-art baseline. Our optimal configuration achieves up
to a 3.4 ×speedup on GPT-2 models compared with SOTA solutions. We hope
that our work will benefit individuals who lack computing resources and expertise,
granting them access to large models1.
1 Introduction
The current success of deep learning (DL) is attributed to the rise of pre-trained large language
models (LLMs) [ 1,2,3,4,5,6,7,8]. LLMs are widely used not only in NLP applications such
as conversation, Q&A, and text generation, but also in multimodal tasks such as image generation
[9,10], and speech synthesis [ 11]. However, training LLMs remains challenging due to the growing
size of models and limited GPU memory. In the past five years, the largest dense models have
significantly increased in size, from 340 million parameters in BERT [ 1] to 540 billion parameters in
PaLM [ 7]. Exerting the power of the mixture-of-experts architecture [ 12], the number of parameters
in the largest sparse model [ 13] has exceeded 1 trillion. Meanwhile, GPU memory has only increased
to 80GB [14, 15].
To address the memory bottleneck, researchers have proposed distributed training techniques. Dis-
tributed data parallelism (DDP) divides input data and assigns each device to compute its partition
∗This work was done when Jiarui worked at HPC-AI Technology Inc.
†Dr. You is a faculty member at NUS. This work was done at HPC-AI Technology Inc.
1The beta version of Elixir is now available at https://github.com/hpcaitech/ColossalAI/tree/feature/elixir
Preprint. Under review. |
2310.12442.pdf | Efficient Long-Range Transformers: You Need to Attend More,
but Not Necessarily at Every Layer
Qingru Zhang†∗, Dhananjay Ram⋄, Cole Hawkins⋄, Sheng Zha⋄, Tuo Zhao†
†Georgia Institute of Technology⋄Amazon Web Service
{qingru.zhang,tourzhao}@gatech.edu
{radhna,colehawk,zhasheng}@amazon.com
Abstract
Pretrained transformer models have demon-
strated remarkable performance across vari-
ous natural language processing tasks. These
models leverage the attention mechanism to
capture long- and short-range dependencies in
the sequence. However, the (full) attention
mechanism incurs high computational cost –
quadratic in the sequence length, which is not
affordable in tasks with long sequences, e.g.,
inputs with 8k tokens. Although sparse at-
tention can be used to improve computational
efficiency, as suggested in existing work, it
has limited modeling capacity and often fails
to capture complicated dependencies in long
sequences. To tackle this challenge, we pro-
pose MASFormer, an easy-to-implement trans-
former variant with Mixed Attention Spans.
Specifically, MASFormer is equipped with full
attention to capture long-range dependencies,
but only at a small number of layers. For the
remaining layers, MASformer only employs
sparse attention to capture short-range depen-
dencies. Our experiments on natural language
modeling and generation tasks show that a
decoder-only MASFormer model of 1.3B pa-
rameters can achieve competitive performance
to vanilla transformers with full attention while
significantly reducing computational cost (up
to 75%). Additionally, we investigate the ef-
fectiveness of continual training with long se-
quence data and how sequence length impacts
downstream generation performance, which
may be of independent interest.
1 Introduction
Pre-trained transformer models have manifested
superior performance in various natural language
processing tasks such as natural language modeling
(NLM) (Dai et al., 2019; Radford et al., 2019), natu-
ral language generation (NLG) (Brown et al., 2020)
and natural language understanding (NLU) (De-
vlin et al., 2019; Liu et al., 2019; He et al., 2021b).
∗Work was done during Qingru Zhang’s internship at
Amazon Web Service.
These models leverage the attention mechanism
(Vaswani et al., 2017) to compute the dependency
score for each pair of tokens in an input sequence.
Some practical tasks require these transformer
models to handle long-sequence inputs like 8k to-
kens. For example, chatbot systems gather long-
term contexts of user interactions to generate infor-
mative texts (Roller et al., 2021). Summarization
for news, government reports, and academic papers
request models to take inputs of long sequences to
generate comprehensive summaries (Shaham et al.,
2022), otherwise models often miss important in-
formation. Note that typical transformer models
apply full attention to capture token dependencies
pair-wise. It leads to a quadratic time and space
complexity w.r.t. input length. However, such a
complexity is prohibitive for long sequences. In
particular, it incurs massive memory consumption
during the back propagation. For example, a trans-
former model with 250M parameters consumes
over 80G GPU memory when sequence length is
8k (Zuo et al., 2022).
To address this scalability issue, various ap-
proaches have been proposed to reduce the com-
plexity. One approach is sparse attention , which
restricts each token to attend a subset of tokens
based on predefined sparsity patterns (Beltagy et al.,
2020; Zaheer et al., 2020; Ainslie et al., 2020). For
instance, block sparse attention (Kitaev et al., 2020;
Ma et al., 2023) divides the input sequence into sev-
eral blocks, and only intra-block attention is per-
formed. Besides, sliding-window attention (Belt-
agy et al., 2020; Zaheer et al., 2020; Ainslie et al.,
2020) allows each token to attend to its neighboring
tokens within a sliding window. These methods,
though reducing the complexity of full attention,
cannot sufficiently capture long-range dependen-
cies. Other variants, such as kernel approximation
(Peng et al., 2021) and low-rank approximation
(Wang et al., 2020; Chen et al., 2021) methods,
share a similar spirit and drawbacks. To com-
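A toy illustration of the sliding-window sparsity pattern described above (illustrative only): each query position attends only to keys within a fixed window w, so the attention matrix is banded rather than full.

```python
import numpy as np

def sliding_window_mask(n, w):
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w   # True where attention is allowed

def masked_attention(q, k, v, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)              # block out-of-window pairs
    a = np.exp(scores - scores.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    return a @ v

rng = np.random.default_rng(0)
n, d = 8, 16
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
print(masked_attention(q, k, v, sliding_window_mask(n, w=2)).shape)   # (8, 16)
```
|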
1703.03400.pdf | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn1Pieter Abbeel1 2Sergey Levine1
Abstract
We propose an algorithm for meta-learning that
is model-agnostic, in the sense that it is com-
patible with any model trained with gradient de-
scent and applicable to a variety of different
learning problems, including classification, re-
gression, and reinforcement learning. The goal
of meta-learning is to train a model on a vari-
ety of learning tasks, such that it can solve new
learning tasks using only a small number of train-
ing samples. In our approach, the parameters of
the model are explicitly trained such that a small
number of gradient steps with a small amount
of training data from a new task will produce
good generalization performance on that task. In
effect, our method trains the model to be easy
to fine-tune. We demonstrate that this approach
leads to state-of-the-art performance on two few-
shot image classification benchmarks, produces
good results on few-shot regression, and acceler-
ates fine-tuning for policy gradient reinforcement
learning with neural network policies.
1. Introduction
Learning quickly is a hallmark of human intelligence,
whether it involves recognizing objects from a few exam-
ples or quickly learning new skills after just minutes of
experience. Our artificial agents should be able to do the
same, learning and adapting quickly from only a few exam-
ples, and continuing to adapt as more data becomes avail-
able. This kind of fast and flexible learning is challenging,
since the agent must integrate its prior experience with a
small amount of new information, while avoiding overfit-
ting to the new data. Furthermore, the form of prior ex-
perience and new data will depend on the task. As such,
for the greatest applicability, the mechanism for learning to
learn (or meta-learning) should be general to the task and
1University of California, Berkeley2OpenAI. Correspondence
to: Chelsea Finn <cbfinn@eecs.berkeley.edu >.
the form of computation required to complete the task.
In this work, we propose a meta-learning algorithm that
is general and model-agnostic, in the sense that it can be
directly applied to any learning problem and model that
is trained with a gradient descent procedure. Our focus
is on deep neural network models, but we illustrate how
our approach can easily handle different architectures and
different problem settings, including classification, regres-
sion, and policy gradient reinforcement learning, with min-
imal modification. In meta-learning, the goal of the trained
model is to quickly learn a new task from a small amount
of new data, and the model is trained by the meta-learner
to be able to learn on a large number of different tasks.
The key idea underlying our method is to train the model’s
initial parameters such that the model has maximal perfor-
mance on a new task after the parameters have been up-
dated through one or more gradient steps computed with
a small amount of data from that new task. Unlike prior
meta-learning methods that learn an update function or
learning rule (Schmidhuber, 1987; Bengio et al., 1992;
Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our
algorithm does not expand the number of learned param-
eters nor place constraints on the model architecture (e.g.
by requiring a recurrent model (Santoro et al., 2016) or a
Siamese network (Koch, 2015)), and it can be readily com-
bined with fully connected, convolutional, or recurrent neu-
ral networks. It can also be used with a variety of loss func-
tions, including differentiable supervised losses and non-
differentiable reinforcement learning objectives.
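As a minimal numerical sketch of this idea (a first-order simplification; full MAML also differentiates through the inner update), consider 1-D linear regression tasks that differ only in their slope, and meta-learn an initialization that adapts well after a single inner gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.05, 0.05                 # inner and outer step sizes

def sample_task():
    slope = rng.uniform(2.0, 4.0)        # each task: y = slope * x
    x = rng.normal(size=20)
    return x, slope * x

def grad(w, x, y):                       # d/dw of mean squared error for y = w * x
    return 2 * np.mean((w * x - y) * x)

w0 = 0.0
for _ in range(2000):
    x, y = sample_task()
    xs, ys, xq, yq = x[:10], y[:10], x[10:], y[10:]    # support / query split
    w_task = w0 - alpha * grad(w0, xs, ys)             # inner adaptation
    w0 -= beta * grad(w_task, xq, yq)                  # first-order outer update

print("meta-learned init:", w0)          # settles near the middle of the slope range
```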
The process of training a model’s parameters such that a
few gradient steps, or even a single gradient step, can pro-
duce good results on a new task can be viewed from a fea-
ture learning standpoint as building an internal representa-
tion that is broadly suitable for many tasks. If the internal
representation is suitable to many tasks, simply fine-tuning
the parameters slightly (e.g. by primarily modifying the top
layer weights in a feedforward model) can produce good
results. In effect, our procedure optimizes for models that
are easy and fast to fine-tune, allowing the adaptation to
happen in the right space for fast learning. From a dynami-
cal systems standpoint, our learning process can be viewed
as maximizing the sensitivity of the loss functions of new
tasks with respect to the parameters: when the sensitivity
is high, small local changes to the parameters can lead to |
Evolutionary-Principles-in-Self-Referential-Learning.pdf | Evolutionary Principles in Self-Referential Learning
(Diploma Thesis)
Jürgen Schmidhuber
Technische Universität München
May 14, 1987
|
2211.03540.pdf | Measuring Progress on Scalable Oversight
for Large Language Models
Samuel R. Bowman∗, Jeeyoon Hyun, Ethan Perez,
Edwin Chen,†Craig Pettit,†Scott Heiner,†Kamilė Lukošiūtė,‡
Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon,
Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson,
Jack Clark, Jackson Kernion, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau,
Kamal Ndousse, Liane Lovitt, Nelson Elhage, Nicholas Schiefer, Nicholas Joseph, Noemí Mercado,
Nova DasSarma, Robin Larson, Sam McCandlish, Sandipan Kundu, Scott Johnston,
Shauna Kravec, Sheer El Showk, Stanislav Fort, Timothy Telleen-Lawton, Tom Brown,
Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds,
Ben Mann, and Jared Kaplan∗
Anthropic,†Surge AI,‡Independent Researcher
Abstract
Developing safe and useful general-purpose AI systems will require us to make progress
on scalable oversight: the problem of supervising systems that potentially outperform us
on most skills relevant to the task at hand. Empirical work on this problem is not straight-
forward, since we do not yet have systems that broadly exceed our abilities. This paper
discusses one of the major ways we think about this problem, with a focus on ways it can
be studied empirically. We first present an experimental design centered on tasks for which
human specialists succeed but unaided humans and current general AI systems fail. We
then present a proof-of-concept experiment meant to demonstrate a key feature of this ex-
perimental design and show its viability with two question-answering tasks: MMLU and
time-limited QuALITY . On these tasks, we find that human participants who interact with
an unreliable large-language-model dialog assistant through chat—a trivial baseline strat-
egy for scalable oversight—substantially outperform both the model alone and their own
unaided performance. These results are an encouraging sign that scalable oversight will
be tractable to study with present models and bolster recent findings that large language
models can productively assist humans with difficult tasks.
1 Introduction
To build and deploy powerful AI responsibly, we will need to develop robust techniques for scalable over-
sight : the ability to provide reliable supervision—in the form of labels, reward signals, or critiques—to
models in a way that will remain effective past the point that models start to achieve broadly human-level
performance (Amodei et al., 2016). These techniques are likely to build on the methods we use today for
steering large models (like RLHF; Christiano et al., 2017; Stiennon et al., 2020), but will need to be further
developed to continue behaving as expected in regimes where models have important knowledge or capa-
∗Correspondence to: {sambowman,jared}@anthropic.com
The first and third blocks of authors are core contributors. Author contributions are detailed in §6. All authors conducted
this work while at Anthropic except where noted. |
2310.05869.pdf | HyperAttention: Long-context Attention in Near-Linear Time
Insu Han
Yale University
insu.han@yale.edu Rajesh Jayaram
Google Research
rkjayaram@google.com Amin Karbasi
Yale University, Google Research
amin.karbasi@yale.edu
Vahab Mirrokni
Google Research
mirrokni@google.com David P. Woodruff
CMU, Google Research
dwoodruf@cs.cmu.edu Amir Zandieh
Independent Researcher
amir.zed512@gmail.com
Abstract
We present an approximate attention mechanism named “HyperAttention” to address the
computational challenges posed by the growing complexity of long contexts used in Large Lan-
guage Models (LLMs). Recent work suggests that in the worst-case scenario, quadratic time is
necessary unless the entries of the attention matrix are bounded or the matrix has low stable
rank. We introduce two parameters which measure: (1) the max column norm in the normalized
attention matrix, and (2) the ratio of row norms in the unnormalized attention matrix after de-
tecting and removing large entries. We use these fine-grained parameters to capture the hardness
of the problem. Despite previous lower bounds, we are able to achieve a linear time sampling
algorithm even when the matrix has unbounded entries or a large stable rank, provided the above
parameters are small. HyperAttention features a modular design that easily accommodates in-
tegration of other fast low-level implementations, particularly FlashAttention. Empirically, em-
ploying Locality Sensitive Hashing (LSH) to identify large entries, HyperAttention outperforms
existing methods, giving significant speed improvements compared to state-of-the-art solutions
like FlashAttention. We validate the empirical performance of HyperAttention on a variety of
different long-context length datasets. For example, HyperAttention makes the inference time
of ChatGLM2 50% faster on 32k context length while perplexity increases from 5.6 to 6.3. On
larger context length, e.g., 131k, with causal masking, HyperAttention offers 5-fold speedup on
a single attention layer.
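The two parameters can be made concrete with a rough sketch (this follows my reading of the abstract, with an arbitrary cutoff for "large entries"; it is not the authors' definition or code).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 16
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
logits = Q @ K.T / np.sqrt(d)

# (1) max column norm of the row-normalized (softmax) attention matrix
A = np.exp(logits - logits.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
max_col_norm = np.linalg.norm(A, axis=0).max()

# (2) ratio of row norms of the unnormalized matrix after removing large entries
U = np.exp(logits)
U[U > np.quantile(U, 0.99)] = 0.0          # crude proxy for "detect and remove"
row_norms = np.linalg.norm(U, axis=1)
print(max_col_norm, row_norms.max() / row_norms.min())
```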
1 Introduction
Transformers [29] have been successfully applied to a wide variety of learning tasks in areas such
as natural language processing [13, 30, 3, 26], computer vision [4, 15], and time series forecast-
ing [34]. Despite their success, these models face serious scalability limitations because na¨ ıve exact
computation of their attention layers incurs quadratic (in the sequence length) runtime and mem-
ory complexities. This presents a fundamental challenge for scaling transformer models to longer
context lengths.
1Empirical studies are conducted by I. Han and A. Zandieh.
2Codes are available at https://github.com/insuhan/hyper-attn .
|
2310.13548.pdf | TOWARDS UNDERSTANDING
SYCOPHANCY IN LANGUAGE MODELS
Mrinank Sharma∗, Meg Tong∗, Tomasz Korbak, David Duvenaud
Amanda Askell, Samuel R. Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds,
Scott R. Johnston, Shauna Kravec, Timothy Maxwell, Sam McCandlish, Kamal Ndousse,
Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda Zhang ,
Ethan Perez
ABSTRACT
Reinforcement learning from human feedback (RLHF) is a popular technique for
training high-quality AI assistants. However, RLHF may also encourage model re-
sponses that match user beliefs over truthful responses, a behavior known as syco-
phancy. We investigate the prevalence of sycophancy in RLHF-trained models
and whether human preference judgments are responsible. We first demonstrate
that five state-of-the-art AI assistants consistently exhibit sycophancy behavior
across four varied free-form text-generation tasks. To understand if human prefer-
ences drive this broadly observed behavior of RLHF models, we analyze existing
human preference data. We find that when a response matches a user’s views,
it is more likely to be preferred. Moreover, both humans and preference mod-
els (PMs) prefer convincingly-written sycophantic responses over correct ones a
non-negligible fraction of the time. Optimizing model outputs against PMs also
sometimes sacrifices truthfulness in favor of sycophancy. Overall, our results in-
dicate that sycophancy is a general behavior of RLHF models, likely driven in part
by human preference judgments favoring sycophantic responses.
1 I NTRODUCTION
AI assistants such as GPT-4 (OpenAI, 2023) are typically trained to produce outputs that humans
rate highly, e.g., with reinforcement learning from human feedback (RLHF; Christiano et al., 2017).
Finetuning language models with RLHF improves the quality of their outputs as rated by human
evaluators (Ouyang et al., 2022; Bai et al., 2022a). However, some have hypothesized that training
schemes based on human preference judgments are liable to exploit human judgments in undesirable
ways, e.g., encouraging AI systems to produce outputs that appeal to human evaluators but are
actually flawed or incorrect (Cotra, 2021). In parallel, recent work has shown that RLHF-trained
models tend to provide answers that are in line with the user they are responding to, in proof-of-
concept evaluations where users state themselves as having a certain view (Perez et al., 2022; Wei
et al., 2023b; Turpin et al., 2023). However, it is unclear whether such failures occur in more varied
and realistic settings with production models, as well as whether such failures are indeed driven by
flaws in human preferences, as Cotra (2021) and Perez et al. (2022) hypothesize.
We therefore first investigate whether state-of-the-art AI assistants provide sycophantic model re-
sponses in a wide variety of realistic settings (§3). We identify consistent patterns of sycophancy
across five state-of-the-art RLHF-trained AI assistants in free-form text-generation tasks. Specifi-
cally, we demonstrate that these AI assistants frequently wrongly admit mistakes when questioned
by the user, give predictably biased feedback, and mimic errors made by the user. The consistency
of these empirical findings suggests sycophancy may indeed be a property of the way RLHF models
are trained, rather than an idiosyncratic detail of a particular system.
*Equal contribution. All authors are at Anthropic. Mrinank Sharma is also at the University of Oxford.
Meg Tong conducted this work as an independent researcher. Tomasz Korbak conducted this work while at
the University of Sussex and FAR AI. First and last author blocks are core contributors. Correspondence to
{mrinank,meg,ethan}@anthropic.com
|
2005.11401.pdf | Retrieval-Augmented Generation for
Knowledge-Intensive NLP Tasks
Patrick Lewis†‡, Ethan Perez⋆,
Aleksandra Piktus†, Fabio Petroni†, Vladimir Karpukhin†, Naman Goyal†, Heinrich Küttler†,
Mike Lewis†, Wen-tau Yih†, Tim Rocktäschel†‡, Sebastian Riedel†‡, Douwe Kiela†
†Facebook AI Research;‡University College London;⋆New York University;
plewis@fb.com
Abstract
Large pre-trained language models have been shown to store factual knowledge
in their parameters, and achieve state-of-the-art results when fine-tuned on down-
stream NLP tasks. However, their ability to access and precisely manipulate knowl-
edge is still limited, and hence on knowledge-intensive tasks, their performance
lags behind task-specific architectures. Additionally, providing provenance for their
decisions and updating their world knowledge remain open research problems. Pre-
trained models with a differentiable access mechanism to explicit non-parametric
memory have so far been only investigated for extractive downstream tasks. We
explore a general-purpose fine-tuning recipe for retrieval-augmented generation
(RAG) — models which combine pre-trained parametric and non-parametric mem-
ory for language generation. We introduce RAG models where the parametric
memory is a pre-trained seq2seq model and the non-parametric memory is a dense
vector index of Wikipedia, accessed with a pre-trained neural retriever. We com-
pare two RAG formulations, one which conditions on the same retrieved passages
across the whole generated sequence, and another which can use different passages
per token. We fine-tune and evaluate our models on a wide range of knowledge-
intensive NLP tasks and set the state of the art on three open domain QA tasks,
outperforming parametric seq2seq models and task-specific retrieve-and-extract
architectures. For language generation tasks, we find that RAG models generate
more specific, diverse and factual language than a state-of-the-art parametric-only
seq2seq baseline.
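The RAG-Sequence formulation mentioned above can be pictured with a toy marginalization (stand-in probability tables, not the released model): retrieve a handful of passages z for the query x, then average the generator over them, p(y|x) = Σ_z p(z|x) p(y|x, z).

```python
import numpy as np

retrieval_scores = np.array([2.0, 0.5, -1.0])        # retriever score per passage
p_z = np.exp(retrieval_scores) / np.exp(retrieval_scores).sum()

p_y_given_z = np.array([                             # generator: p(y | x, z)
    [0.9, 0.1],                                      # passage 0 supports answer 0
    [0.6, 0.4],
    [0.2, 0.8],
])

p_y = p_z @ p_y_given_z                              # marginalize over passages
print("p(y|x) =", p_y, "-> answer", int(np.argmax(p_y)))
```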
1 Introduction
Pre-trained neural language models have been shown to learn a substantial amount of in-depth knowl-
edge from data [ 47]. They can do so without any access to an external memory, as a parameterized
implicit knowledge base [ 51,52]. While this development is exciting, such models do have down-
sides: They cannot easily expand or revise their memory, can’t straightforwardly provide insight into
their predictions, and may produce “hallucinations” [ 38]. Hybrid models that combine parametric
memory with non-parametric (i.e., retrieval-based) memories [ 20,26,48] can address some of these
issues because knowledge can be directly revised and expanded, and accessed knowledge can be
inspected and interpreted. REALM [ 20] and ORQA [ 31], two recently introduced models that
combine masked language models [8] with a differentiable retriever, have shown promising results, |
2304.07313.pdf | M2T: Masking Transformers Twice for Faster Decoding
Fabian Mentzer
Google Research
mentzer@google.com Eirikur Agustsson
Google Research
eirikur@google.com Michael Tschannen
Google Research
tschannen@google.com
Abstract
We show how bidirectional transformers trained for
masked token prediction can be applied to neural image
compression to achieve state-of-the-art results. Such mod-
els were previously used for image generation by pro-
gressivly sampling groups of masked tokens according to
uncertainty-adaptive schedules. Unlike these works, we
demonstrate that predefined, deterministic schedules per-
form as well or better for image compression. This insight
allows us to use masked attention during training in ad-
dition to masked inputs, and activation caching during in-
ference, to significantly speed up our models ( ≈4×higher
inference speed) at a small increase in bitrate.
1. Introduction
Recently, transformers trained for masked token predic-
tion have successfully been applied to neural image and
video generation [11, 35]. In MaskGIT [11], the authors
use a VQ-GAN [16] to map images to vector-quantized to-
kens, and learn a transformer to predict the distribution of
these tokens. The key novelty of the approach was to use
BERT-like [13] random masks during training to then pre-
dict tokens in groups during inference, sampling tokens in
the same group in parallel at each inference step. Thereby,
each inference step is conditioned on the tokens generated
in previous steps. A big advantage of BERT-like training
with grouped inference versus prior state-of-the-art is that
considerably fewer steps are required to produce realistic
images (typically 10-20, rather than one per token).
These models are optimized to minimize the cross en-
tropy between the token distribution p modeled by the trans-
former and the true (unknown) token distribution q, as mea-
sured via negative log likelihood (NLL). As is known from
information theory, this is equivalent to the bit cost required
to (losslessly) store a sample drawn from q with a model
p [39]. Indeed, any model p that predicts an explicit joint
distribution over tokens in a deterministic way can be turned
into a compression model by using p to entropy code the to-
kens, rather than sampling them.
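A toy numerical check of this claim (not the paper's codec): the achievable lossless code length of a token sequence under a model p is its negative log-likelihood, so a better model means fewer bits.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([0.7, 0.1, 0.1, 0.1])              # skewed token distribution
tokens = rng.choice(4, size=1000, p=p_true)

def code_length_bits(tokens, p_model):
    return -np.log2(p_model[tokens]).sum()           # ideal entropy-coding cost

print(f"matched model : {code_length_bits(tokens, p_true):.0f} bits")
print(f"uniform model : {code_length_bits(tokens, np.full(4, 0.25)):.0f} bits")
```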
[Figure 1 plot: rate (bpp) vs. PSNR on Kodak for Ours (MT), Ours (M2T), ELIC (He '22), ContextFormer (Koyuncu '22), Devil-Details (Zou '22), VTM, Entroformer (Qian '22), Cheng '20, CHARM (Minnen '20), Checkerboard (He '21), MIM (El-Nouby '22), BPG, and JPEG.]
Figure 1: Rate distortion results on Kodak. Our MT outperforms the prior state-of-the-art ELIC [18]; M2T only incurs a small reduction in rate-distortion performance compared to MT while running about 4× faster on hardware (see Fig. 4)
Motivated by this, we aim to employ masked trans-
formers for neural image compression. Previous work has
used masked and unmasked transformers in the entropy
model for video compression [37, 25] and image compres-
sion [29, 22, 15]. However, these models are often ei-
ther prohibitively slow [22], or lag in rate-distortion per-
formance [29, 15]. In this paper, we show a conceptually
simple transformer-based approach that is state-of-the-art in
neural image compression, at practical runtimes. The model
is using off-the-shelf transformers, and does not rely on
special positional encodings or multi-scale factorizations,
in contrast to previous work. Additionally, we propose
a new variant combining ideas from MaskGIT-like input-
masked transformers and fully autoregressive attention-
masked transformers. The resulting model masks both the
input and attention layers, and allows us to substantially im-
prove runtimes at a small cost in rate-distortion.
To train masked transformers, the tokens to be masked
in each training step are usually selected uniformly at ran-
dom. During inference, the models are first applied to maskarXiv:2304.07313v1 [eess.IV] 14 Apr 2023 |
2303.15343v4.pdf | Sigmoid Loss for Language Image Pre-Training
Xiaohua Zhai⋆Basil Mustafa Alexander Kolesnikov Lucas Beyer⋆
Google DeepMind, Z ¨urich, Switzerland
{xzhai, basilm, akolesnikov, lbeyer }@google.com
Abstract
We propose a simple pairwise Sigmoid loss for
Language-Image Pre-training (SigLIP). Unlike standard
contrastive learning with softmax normalization, the sig-
moid loss operates solely on image-text pairs and does not
require a global view of the pairwise similarities for nor-
malization. The sigmoid loss simultaneously allows fur-
ther scaling up the batch size, while also performing bet-
ter at smaller batch sizes. Combined with Locked-image
Tuning, with only four TPUv4 chips, we train a SigLiT
model that achieves 84.5% ImageNet zero-shot accuracy
in two days. The disentanglement of the batch size from
the loss further allows us to study the impact of exam-
ples vs pairs and negative to positive ratio. Finally, we
push the batch size to the extreme, up to one million, and
find that the benefits of growing batch size quickly dimin-
ish, with a more reasonable batch size of 32 k being suf-
ficient. We release our models at https://github.com/google-research/big_vision and hope our
research motivates further explorations in improving the
quality and efficiency of language-image pre-training.
1. Introduction
Contrastive pre-training using weak supervision from
image-text pairs found on the web is becoming the go-to
method for obtaining generic computer vision backbones,
slowly replacing pre-training on large labelled multi-class
datasets. The high-level idea is to simultaneously learn
an aligned representation space for images and texts using
paired data. Seminal works CLIP [36] and ALIGN [23] es-
tablished the viability of this approach at a large scale, and
following their success, many large image-text datasets be-
came available privately [59, 13, 21, 49] and publicly [40,
6, 15, 7, 41].
The standard recipe to pre-train such models leverages
the image-text contrastive objective. It aligns the image and
⋆equal contribution
Table 1: SigLiT and SigLIP results. Sigmoid loss is mem-
ory efficient, allows larger batch sizes (BS) that unlocks
language image pre-training with a small number of chips.
SigLiT model with a frozen public
B/8 checkpoint [42],
trained on the LiT image-text dataset [59] using four TPU-
v4 chips for one day, achieves 79.7% 0-shot accuracy on
ImageNet. The same setup with a g/14 checkpoint [58]
leads to 84.5% accuracy, trained for two days. With a pub-
lic unlocked
B/16 image checkpoint [42], trained on the
WebLI dataset [13], SigLIP achieves 71.0% 0-shot accu-
racy using 16 TPU-v4 chips for three days. The last two
rows show results with randomly initialized models.
Model    Image  Text  BS    #TPUv4  Days  INet-0
SigLiT   B/8    L∗    32 k  4       1     79.8
SigLiT   g/14   L     20 k  4       2     84.5
SigLIP   B/16   B     16 k  16      3     71.0
SigLIP   B/16   B     32 k  32      2     72.1
SigLIP   B/16   B     32 k  32      5     73.4
∗We use a variant of the L model with 12 layers.
text embeddings for matching (positive) image-text pairs
while making sure that unrelated (negative) image-text pairs
are dissimilar in the embedding space. This is achieved via a
batch-level softmax-based contrastive loss, applied twice to
normalize the pairwise similarity scores across all images,
then all texts. A naive implementation of the softmax is
numerically unstable; it is usually stabilized by subtracting
the maximum input value before applying the softmax [18],
which requires another pass over the full batch.
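A schematic side-by-side of the two objectives (illustrative shapes and constants, not the paper's implementation): the softmax contrastive loss normalizes the similarity matrix across the whole batch twice, while the pairwise sigmoid loss scores each (image, text) pair independently with label +1 on the diagonal and -1 elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)
B, D, t, b = 4, 32, 10.0, -10.0            # batch, dim, temperature, sigmoid bias
img = rng.normal(size=(B, D)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(B, D)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
sim = t * img @ txt.T                      # pairwise image-text similarities

def softmax_contrastive(sim):
    logp_rows = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))   # over texts
    logp_cols = sim - np.log(np.exp(sim).sum(axis=0, keepdims=True))   # over images
    return -(np.diag(logp_rows).mean() + np.diag(logp_cols).mean()) / 2

def sigmoid_pairwise(sim):
    labels = 2 * np.eye(len(sim)) - 1      # +1 matched pair, -1 otherwise
    return np.mean(np.log1p(np.exp(-labels * (sim + b))))   # -log sigmoid(label * logit)

print(softmax_contrastive(sim), sigmoid_pairwise(sim))
```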
In this paper, we propose a simpler alternative: the sig-
moid loss. It does not require any operation across the full
batch and hence greatly simplifies the distributed loss im-
plementation and boosts efficiency. Additionally, it con-
ceptually decouples the batch size from the definition of
the task. We compare the proposed sigmoid loss with the
standard softmax loss across multiple setups. In partic-
ular, we investigate sigmoid-based loss with two promi-
|
2404.11018v1.pdf | 2024-4-18
Many-Shot In-Context Learning
Rishabh Agarwal*, Avi Singh*, Lei M. Zhang†, Bernd Bohnet†, Stephanie Chan†, Ankesh Anand , Zaheer
Abbas , Azade Nova , John D. Co-Reyes , Eric Chu , Feryal Behbahani , Aleksandra Faust and Hugo Larochelle
*Contributed equally,†Core contribution
Large language models (LLMs) excel at few-shot in-context learning (ICL) – learning from a few examples
provided in context at inference, without any weight updates. Newly expanded context windows allow
us to investigate ICL with hundreds or thousands of examples – the many-shot regime. Going from
few-shot to many-shot, we observe significant performance gains across a wide variety of generative
and discriminative tasks. While promising, many-shot ICL can be bottlenecked by the available amount
of human-generated outputs. To mitigate this limitation, we explore two new settings: “Reinforced ICL”
and “Unsupervised ICL”. Reinforced ICL uses model-generated chain-of-thought rationales in place of
human rationales. Unsupervised ICL removes rationales from the prompt altogether, and prompts the
model only with domain-specific inputs. We find that both Reinforced and Unsupervised ICL can be quite
effective in the many-shot regime, particularly on complex reasoning tasks. Finally, we demonstrate
that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can
learn high-dimensional functions with numerical inputs. Our analysis also reveals the limitations of
next-token prediction loss as an indicator of downstream performance.
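To make the three regimes concrete, here is a toy prompt constructor (my own formatting, not the paper's prompts): few-shot and many-shot differ only in how many demonstrations are packed into the context, and Unsupervised ICL keeps the inputs but drops the outputs and rationales.

```python
# Toy in-context-learning prompt construction (illustrative formatting only).
def icl_prompt(examples, query, mode="many_shot"):
    blocks = []
    for x, y in examples:
        if mode == "unsupervised":                  # inputs only
            blocks.append(f"Problem: {x}")
        else:                                       # few-shot / many-shot demos
            blocks.append(f"Problem: {x}\nAnswer: {y}")
    blocks.append(f"Problem: {query}\nAnswer:")
    return "\n\n".join(blocks)

demos = [(f"{i} + {i}", str(2 * i)) for i in range(500)]     # many-shot regime
print(icl_prompt(demos[:4], "7 + 7"))                        # few-shot prompt
print(len(icl_prompt(demos, "7 + 7")), "characters with 500 shots")
```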
1. Introduction
[Figure 1 bar chart: task performance (%) of Gemini 1.5 Pro under few-shot vs. many-shot ICL on summarization, planning (Logistics), sequential parity (20 digits), GPQA, translation, problem-solving (MATH), classification (64 dim), code verifier, Big-Bench Hard (8 tasks), GSM8K (transfer), and sentiment analysis (FP); many-shot gains range from +4.2 to +36.4 points.]
Figure 1|Many-shot vs Few-Shot In-Context Learning (ICL) across several tasks. Many-shot learning exhibits consistent
performance gains over few-shot ICL. This gain is especially dramatic for difficult non-natural language tasks like sequential
parity prediction and linear classification. Number of best-performing shots for many-shot ICL are shown inside the bar for
each task. For few-shot ICL, we either use typical number of shots used on a benchmark, for example, 4-shot for MATH, or
the longest prompt among the ones we tested with less than the GPT-3 context length of 2048 tokens. Reasoning-oriented
tasks, namely MATH, GSM8K, BBH, and GPQA uses human-generated chain-of-thought rationales. For translation, we report
performance FLORES-MT result on English to Kurdish, summarization uses XLSum, MATH corresponds to the MATH500
test set, and sentiment analysis results are reported with semantically-unrelated labels. See §3, §4, and §5 for more details.
Large language models (LLMs) have demonstrated a remarkable ability to perform in-context
learning (ICL): they can learn a new task just from input-output examples, also known as shots, which
precede a test input presented within the LLM context. However, an LLM’s context window, i.e. the
Corresponding author(s): rishabhagarwal@google.com, singhavi@google.com
©2024 Google DeepMind. All rights reserved. |
2305.09836.pdf | Revisiting the Minimalist Approach to Offline
Reinforcement Learning
Denis Tarasov Vladislav Kurenkov Alexander Nikulin Sergey Kolesnikov
Tinkoff
{den.tarasov, v.kurenkov, a.p.nikulin, s.s.kolesnikov}@tinkoff.ai
Abstract
Recent years have witnessed significant advancements in offline reinforcement
learning (RL), resulting in the development of numerous algorithms with varying de-
grees of complexity. While these algorithms have led to noteworthy improvements,
many incorporate seemingly minor design choices that impact their effectiveness
beyond core algorithmic advances. However, the effect of these design choices
on established baselines remains understudied. In this work, we aim to bridge
this gap by conducting a retrospective analysis of recent works in offline RL and
propose ReBRAC, a minimalistic algorithm that integrates such design elements
built on top of the TD3+BC method. We evaluate ReBRAC on 51 datasets with
both proprioceptive and visual state spaces using D4RL and V-D4RL benchmarks,
demonstrating its state-of-the-art performance among ensemble-free methods in
both offline and offline-to-online settings. To further illustrate the efficacy of
these design choices, we perform a large-scale ablation study and hyperparameter
sensitivity analysis on the scale of thousands of experiments.1
1 Introduction
Interest of the reinforcement learning (RL) community in the offline setting has led to a myriad
of new algorithms specifically tailored to learning highly performant policies without the ability
to interact with an environment (Levine et al., 2020; Prudencio et al., 2022). Yet, similar to the
advances in online RL (Engstrom et al., 2020; Henderson et al., 2018), many of those algorithms
come with an added complexity – design and implementation choices beyond core algorithmic
innovations, requiring a delicate effort in reproduction, hyperparameter tuning, and causal attribution
of performance gains.
Indeed, the issue of complexity was already raised in the offline RL community by Fujimoto
& Gu (2021); the authors highlighted veiled design and implementation-level adjustments (e.g.,
different architectures or actor pre-training) and then demonstrated how a simple behavioral cloning
regularization added to the TD3 (Fujimoto et al., 2018) constitutes a strong baseline in the offline
setting. This minimalistic and uncluttered algorithm, TD3+BC, has become a de-facto standard
baseline to be compared against. Indeed, most new algorithms juxtapose against it and claim
significant gains over it (Akimov et al., 2022; An et al., 2021; Nikulin et al., 2023; Wu et al., 2022;
Chen et al., 2022b; Ghasemipour et al., 2022). However, the application of newly emerged design and
implementation choices to this baseline is still missing.
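For reference, the TD3+BC objective that this line of work builds on modifies the TD3 actor update by a single behavioral-cloning term. A minimal PyTorch-style sketch of that actor loss (the scale-normalized weighting follows the usual TD3+BC formulation; it is shown for orientation and is not ReBRAC's full set of design choices):

```python
import torch

def td3_bc_actor_loss(actor, critic, states, actions, alpha=2.5):
    """TD3+BC actor loss: maximize Q(s, pi(s)) while staying close to dataset actions.
    `alpha` trades off the value term against the behavioral-cloning penalty."""
    pi = actor(states)                       # policy actions for the batch of states
    q = critic(states, pi)                   # critic's evaluation of those actions
    lam = alpha / q.abs().mean().detach()    # scale-invariant weight on the value term
    bc_penalty = ((pi - actions) ** 2).mean()
    return -(lam * q.mean()) + bc_penalty    # minimized by the actor optimizer
```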
In this work, we build upon the Fujimoto & Gu (2021) line of research and ask: to what extent can
newly emerged minor design choices advance the minimalistic offline RL algorithm? The answer
is illustrated in Figure 1: we propose an extension to TD3+BC, ReBRAC (Section 3), that simply
adds on recently appeared design decisions upon it. We test our algorithm on both proprioceptive and
1Our implementation is available at https://github.com/DT6A/ReBRAC
37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2305.09836v2 [cs.LG] 24 Oct 2023 |
2002.05202.pdf | arXiv:2002.05202v1 [cs.LG] 12 Feb 2020
GLU Variants Improve Transformer
Noam Shazeer
Google
noam@google.com
February 14, 2020
Abstract
Gated Linear Units [Dauphin et al., 2016] consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed-forward sublayers of the Transformer [Vaswani et al., 2017] sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations.
1 Introduction
The Transformer [Vaswani et al., 2017] sequence-to-sequence model alternates between multi-head attention, and what it calls "position-wise feed-forward networks" (FFN). The FFN takes a vector x (the hidden representation at a particular position in the sequence) and passes it through two learned linear transformations, represented by the matrices W1 and W2 and bias vectors b1 and b2. A rectified-linear (ReLU) [Glorot et al., 2011] activation function is applied between the two linear transformations.

FFN(x, W1, W2, b1, b2) = max(0, xW1 + b1) W2 + b2    (1)

Following the T5 codebase [Raffel et al., 2019]1, we use a version with no bias:

FFN_ReLU(x, W1, W2) = max(xW1, 0) W2    (2)

Subsequent work has proposed replacing the ReLU with other nonlinear activation functions such as Gaussian Error Linear Units, GELU(x) = x Φ(x) [Hendrycks and Gimpel, 2016], and Swish_β(x) = x σ(βx) [Ramachandran et al., 2017].

FFN_GELU(x, W1, W2) = GELU(xW1) W2
FFN_Swish(x, W1, W2) = Swish_1(xW1) W2    (3)
2 Gated Linear Units (GLU) and Variants
[Dauphin et al., 2016] introduced Gated Linear Units (GLU), a neural network layer defined as the component-wise product of two linear transformations of the input, one of which is sigmoid-activated. They also suggest omitting the activation, which they call a "bilinear" layer and attribute to [Mnih and Hinton, 2007].

GLU(x, W, V, b, c) = σ(xW + b) ⊗ (xV + c)
Bilinear(x, W, V, b, c) = (xW + b) ⊗ (xV + c)    (4)
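As a concrete reading of Equations (2) and (4), a gated FFN replaces the single first projection with two projections whose component-wise product forms the hidden representation. A minimal PyTorch sketch (bias-free, following the T5 convention above; the activation passed in, e.g. GELU or Swish, gives the corresponding GLU variant, and the layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUFeedForward(nn.Module):
    """Gated FFN: W2-projection of act(x W) * (x V), with no bias terms."""
    def __init__(self, d_model, d_ff, activation=torch.sigmoid):
        super().__init__()
        self.W = nn.Linear(d_model, d_ff, bias=False)   # gated branch
        self.V = nn.Linear(d_model, d_ff, bias=False)   # linear branch
        self.W2 = nn.Linear(d_ff, d_model, bias=False)  # output projection
        self.activation = activation  # sigmoid -> GLU, identity -> bilinear, GELU/Swish -> other variants

    def forward(self, x):
        return self.W2(self.activation(self.W(x)) * self.V(x))

ffn = GLUFeedForward(d_model=512, d_ff=2048, activation=F.silu)
y = ffn(torch.randn(2, 10, 512))
```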
We can also define GLU variants using other activation functions:
1Also in the interest of ML fairness.
|
978-3-642-41822-8-15.pdf | Auto-encoder Based Data Clustering
Chunfeng Song1, Feng Liu2, Yongzhen Huang1, Liang Wang1, and Tieniu Tan1
1National Laboratory of Pattern Recognition (NLPR),
Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
2School of Automation, Southeast University, Nanjing, 210096, China
Abstract. Linear or non-linear data transformations are widely used processing techniques in clustering. Usually, they are beneficial to enhancing data representation. However, if data have a complex structure, these techniques would be unsatisfying for clustering. In this paper, based on the auto-encoder network, which can learn a highly non-linear mapping function, we propose a new clustering method. Via simultaneously considering data reconstruction and compactness, our method can obtain stable and effective clustering. Experiments on three databases show that the proposed clustering model achieves excellent performance in terms of both accuracy and normalized mutual information.
Keywords: Clustering, Auto-encoder, Non-linear transformation.
1 Introduction
Data clustering [4] is a basic problem in pattern recognition, whose goal is grouping similar data into the same cluster. It attracts much attention and various clustering methods have been presented, most of which either deal with the original data, e.g., K-means [10], its linear transformation, e.g., spectral clustering [7], or its simple non-linear transformation, e.g., kernel K-means [2]. However, if original data are not well distributed due to large intra-variance as shown in the left part of Figure 1, it would be difficult for traditional clustering algorithms to achieve satisfying performance.
To address the above problem, we attempt to map the original data space to a new space which is more suitable for clustering. The auto-encoder network [1] is a good candidate to handle this problem. It provides a non-linear mapping function by iteratively learning the encoder and the decoder. The encoder is actually the non-linear mapping function, and the decoder demands accurate data reconstruction from the representation generated by the encoder. This process is iterative, which guarantees that the mapping function is stable and effective to represent the original data. Different from kernel K-means [2], which also introduces non-linear transformations with fixed kernel functions, the non-linear function in the auto-encoder is learned by optimizing an objective function.
The auto-encoder network is originally designed for data representation, and it aims to minimize the reconstruction error. However, to the best of our knowledge, though widely used, the auto-encoder network has not been utilized for
J. Ruiz-Shulcloper and G. Sanniti di Baja (Eds.): CIARP 2013, Part I, LNCS 8258, pp. 117–124, 2013.
© Springer-Verlag Berlin Heidelberg 2013 |
sutskever10a.pdf | On the Convergence Properties of Contrastive Divergence
Ilya Sutskever Tijmen Tieleman
University of Toronto University of Toronto
Abstract
Contrastive Divergence (CD) is a popular
method for estimating the parameters of
Markov Random Fields (MRFs) by rapidly
approximating an intractable term in the gra-
dient of the log probability. Despite CD’s
empirical success, little is known about its
theoretical convergence properties.
In this paper, we analyze the CD1 update rule for Restricted Boltzmann Machines (RBMs) with binary variables. We show that this update is not the gradient of any function, and construct a counterintuitive "regularization function" that causes CD learning to cycle indefinitely. Nonetheless, we show that the regularized CD update has a fixed point for a large class of regularization functions using Brouwer's fixed point theorem.
1 INTRODUCTION
Markov Random Fields (MRFs) are an important class
of probabilistic models that are useful for denoising,
prediction, and density estimation (Cross and Jain,
1981; Malfait and Roose, 1997; Portilla et al., 2003;
Roth and Black, 2005; Li, 1994; Wainwright, 2008). In
particular, MRFs subsume the Restricted Boltzmann
Machines (RBMs) (Hinton, 2002; Smolensky, 1986),
which are essential for learning Deep Belief Networks
(Hinton et al., 2006; Bengio et al., 2007; Hinton and
Salakhutdinov, 2006).
Nearly every application of MRFs requires estimating
their parameters from data. The natural maximum-
likelihood parameter estimation is challenging, be-
cause the log probability’s gradient is the difference
of two expectations, of which one cannot be easily
computed. As a result, a number of approximate
parameter estimation methods have been developed.
Appearing in Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors.
Pseudolikelihood (Besag, 1977) and Score Matching
(Hyvarinen, 2006) are tractable alternatives to the
log probability objective which are easier to optimize,
and Loopy Belief Propagation and its variants (Wain-
wright, 2008) directly approximate the intractable ex-
pectation in the gradient. This paper focuses on Con-
trastive Divergence (CD) (Hinton, 2002), which di-
rectly approximates the intractable expectation with
an easy Monte Carlo estimate. Being trivial to imple-
ment, CD is widely used (Hinton et al., 2006), but its
convergence properties are not entirely understood.
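To make the object of study concrete, the CD1 rule approximates the intractable model expectation with a single Gibbs step started at the data. A minimal NumPy sketch of the standard CD-1 update for a binary RBM (the step size and the use of probabilities rather than samples in the negative phase are common conventions, not choices specific to this paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.01, rng=None):
    """One CD-1 step for a binary RBM with weights W, visible bias b, hidden bias c.
    Positive phase uses the data v0; negative phase uses a one-step reconstruction."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_h0 = sigmoid(v0 @ W + c)                   # P(h = 1 | v0)
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0   # sampled hidden states
    p_v1 = sigmoid(h0 @ W.T + b)                 # reconstruction probabilities
    p_h1 = sigmoid(p_v1 @ W + c)                 # hidden probabilities given the reconstruction
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]
    b += lr * (v0 - p_v1).mean(axis=0)
    c += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b, c
```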
In this paper we gain a better understanding of the
noiseless CD1 update rule for binary RBMs, and report
the following results:
•We provide two proofs showing that the CD up-
date is not the gradient of any objective function.
This result was first proved by Tieleman (2007)
and stated by Bengio and Delalleau (2009).
•We construct an example of a nonconvex regu-
larization function that causes the CD update to
cycle indefinitely.
•The CD update is shown to have at least one fixed
point when used with L 2regularization.
2 RELATED WORK
There has been much work attempting to elucidate
the convergence properties of CD. Some of this work
shows that CD minimizes a known cost function when
used with specific Markov chains. For example, if
the Markov chain used to estimate the intractable
expectation (Hinton, 2002) is the Langevin Monte
Carlo method, then CD computes the gradient of the
score-matching objective function; similarly, when the
Markov chain samples a random component of the
data vector from its conditional distribution, then CD
becomes the gradient of the log pseudo-likelihood of
the model (Hyvarinen, 2007). Other work has pro-
vided general conditions under which CD converges to
the maximum likelihood solution (Yuille, 2004), which |
2306.02572.pdf | Les Houches Summer School Lecture Notes 2022 Preprint
Introduction to Latent Variable Energy-Based Models:
A Path Towards Autonomous Machine Intelligence
Anna Dawid1,2and Yann LeCun3,4⋆
1ICFO - Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology,
Av. Carl Friedrich Gauss 3, 08860 Castelldefels (Barcelona), Spain
2Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
3Courant Institute of Mathematical Sciences, New York University
4Meta - Fundamental AI Research
⋆yann@cs.nyu.edu
June 6, 2023
Abstract
Current automated systems have crucial limitations that need to be addressed before
artificial intelligence can reach human-like levels and bring new technological revolu-
tions. Among others, our societies still lack Level 5 self-driving cars, domestic robots,
and virtual assistants that learn reliable world models, reason, and plan complex ac-
tion sequences. In these notes, we summarize the main ideas behind the architecture
of autonomous intelligence of the future proposed by Yann LeCun. In particular, we in-
troduce energy-based and latent variable models and combine their advantages in the
building block of LeCun’s proposal, that is, in the hierarchical joint embedding predictive
architecture (H-JEPA).
Contents
1 Introduction 2
2 Towards autonomous machine intelligence 3
2.1 Applications of machine learning today 3
2.2 Limitations of the current machine learning 4
2.3 New paradigm to autonomous intelligence 6
2.4 Self-supervised learning and representing uncertainty 7
2.5 Structure of the manuscript 8
3 Introduction to energy-based models 8
3.1 Energy-based models vs. probabilistic models 10
3.2 Latent variable energy-based models 11
4 Training an energy-based model 12
4.1 Contrastive methods 14
4.2 Architectural and regularized methods 16
5 Examples of energy-based models 17
5.1 Hopfield networks 17
5.2 Boltzmann machines 17
arXiv:2306.02572v1 [cs.LG] 5 Jun 2023 |
2306.16922.pdf | THE EXPRESSIVE LEAKY MEMORY NEURON:
AN EFFICIENT AND EXPRESSIVE PHENOMENOLOGICAL
NEURON MODEL CAN SOLVE LONG-HORIZON TASKS
Aaron Spieler1,2, Nasim Rahaman3,2, Georg Martius1,2, Bernhard Schölkopf2, and Anna Levina1,4
1University of Tübingen
2Max Planck Institute for Intelligent Systems, Tübingen
3Mila, Quebec AI Institute
4Max Planck Institute for Biological Cybernetics, Tübingen
ABSTRACT
Biological cortical neurons are remarkably sophisticated computational devices,
temporally integrating their vast synaptic input over an intricate dendritic tree,
subject to complex, nonlinearly interacting internal biological processes. A recent
study proposed to characterize this complexity by fitting accurate surrogate models
to replicate the input-output relationship of a detailed biophysical cortical pyramidal
neuron model and discovered it needed temporal convolutional networks (TCN)
with millions of parameters. Requiring these many parameters, however, could
be the result of a misalignment between the inductive biases of the TCN and
cortical neuron’s computations. In light of this, and with the aim to explore
the computational implications of leaky memory units and nonlinear dendritic
processing, we introduce the Expressive Leaky Memory (ELM) neuron model, a
biologically inspired phenomenological model of a cortical neuron. Remarkably, by
exploiting a few such slowly decaying memory-like hidden states and two-layered
nonlinear integration of synaptic input, our ELM neuron can accurately match
the aforementioned input-output relationship with under ten-thousand trainable
parameters. To further assess the computational ramifications of our neuron design,
we evaluate on various tasks with demanding temporal structures, including the
Long Range Arena (LRA) datasets, as well as a novel neuromorphic dataset based
on the Spiking Heidelberg Digits dataset (SHD-Adding). Leveraging a larger
number of memory units with sufficiently long timescales, and correspondingly
sophisticated synaptic integration, the ELM neuron proves to be competitive on
both datasets, reliably outperforming the classic Transformer or Chrono-LSTM
architectures on the latter, even solving the Pathfinder-X task with over 70% accuracy
(16k context length). These findings indicate the importance of inductive biases
for efficient surrogate neuron models and the potential for biologically motivated
models to enhance performance in challenging machine learning tasks.
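One way to picture the two ingredients emphasized above, a small set of slowly decaying memory-like states and a learned nonlinear integration of synaptic input, is the toy sketch below. It is a plausible reading of the design rather than the paper's exact parameterization; the decay form, MLP sizes, and readout are all assumptions.

```python
import math
import torch
import torch.nn as nn

class LeakyMemoryNeuron(nn.Module):
    """Toy phenomenological neuron: leaky memory states updated by an MLP that
    nonlinearly integrates the current synaptic input with the previous memory."""
    def __init__(self, n_syn, n_mem=100, tau_max=1000.0):
        super().__init__()
        # Per-unit timescales (in steps); long timescales give slowly decaying memory.
        self.log_tau = nn.Parameter(torch.linspace(0.0, math.log(tau_max), n_mem))
        self.integrate = nn.Sequential(
            nn.Linear(n_syn + n_mem, 2 * n_mem), nn.Tanh(), nn.Linear(2 * n_mem, n_mem))
        self.readout = nn.Linear(n_mem, 1)

    def forward(self, x):                                    # x: (time, batch, n_syn)
        mem = torch.zeros(x.shape[1], self.log_tau.numel())
        decay = torch.exp(-1.0 / torch.exp(self.log_tau))    # per-unit leak factor in (0, 1)
        outputs = []
        for x_t in x:
            update = self.integrate(torch.cat([x_t, mem], dim=-1))
            mem = decay * mem + (1 - decay) * update
            outputs.append(self.readout(mem))
        return torch.stack(outputs)                          # one prediction per time step
```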
1 INTRODUCTION
The human brain has impressive computational capabilities, yet the precise mechanisms underpinning
them remain largely undetermined. Two complementary directions are pursued in search of
mechanisms for brain computations. On the one hand, many researchers investigate how these
capabilities could arise from the collective activity of neurons connected into a complex network
structure Maass (1997); Gerstner & Kistler (2002); Grüning & Bohte (2014), where individual
neurons might be as basic as leaky integrators or ReLU neurons. On the other hand, it has been
proposed that the intrinsic computational power possessed by individual neurons Koch (1997); Koch
& Segev (2000); Silver (2010) contributes a significant part to the computations.
Even though most work focuses on the former hypothesis, an increasing amount of evidence indicates
that cortical neurons are remarkably sophisticated Silver (2010); Gidon et al. (2020); Larkum (2022),
arXiv:2306.16922v2 [cs.NE] 10 Oct 2023 |
2004.04906.pdf | Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin∗, Barlas O ˘guz∗, Sewon Min†, Patrick Lewis,
Ledell Wu, Sergey Edunov, Danqi Chen‡, Wen-tau Yih
Facebook AI†University of Washington‡Princeton University
{vladk, barlaso, plewis, ledell, edunov, scottyih }@fb.com
sewon@cs.washington.edu
danqic@cs.princeton.edu
Abstract
Open-domain question answering relies on ef-
ficient passage retrieval to select candidate
contexts, where traditional sparse vector space
models, such as TF-IDF or BM25, are the de
facto method. In this work, we show that
retrieval can be practically implemented us-
ingdense representations alone, where em-
beddings are learned from a small number
of questions and passages by a simple dual-
encoder framework. When evaluated on a
wide range of open-domain QA datasets, our
dense retriever outperforms a strong Lucene-
BM25 system greatly by 9%-19% absolute in
terms of top-20 passage retrieval accuracy, and
helps our end-to-end QA system establish new
state-of-the-art on multiple open-domain QA
benchmarks.1
1 Introduction
Open-domain question answering (QA) (Voorhees,
1999) is a task that answers factoid questions us-
ing a large collection of documents. While early
QA systems are often complicated and consist of
multiple components (Ferrucci (2012); Moldovan
et al. (2003), inter alia ), the advances of reading
comprehension models suggest a much simplified
two-stage framework: (1) a context retriever first
selects a small subset of passages where some
of them contain the answer to the question, and
then (2) a machine reader can thoroughly exam-
ine the retrieved contexts and identify the correct
answer (Chen et al., 2017). Although reducing
open-domain QA to machine reading is a very rea-
sonable strategy, a huge performance degradation
is often observed in practice2, indicating the needs
of improving retrieval.
∗Equal contribution
1The code and trained models have been released at
https://github.com/facebookresearch/DPR.
2For instance, the exact match score on SQuAD v1.1 drops
from above 80% to less than 40% (Yang et al., 2019a).
Retrieval in open-domain QA is usually imple-
mented using TF-IDF or BM25 (Robertson and
Zaragoza, 2009), which matches keywords effi-
ciently with an inverted index and can be seen
as representing the question and context in high-
dimensional, sparse vectors (with weighting). Con-
versely, the dense , latent semantic encoding is com-
plementary to sparse representations by design. For
example, synonyms or paraphrases that consist of
completely different tokens may still be mapped to
vectors close to each other. Consider the question
“Who is the bad guy in lord of the rings?” , which can
be answered from the context “Sala Baker is best
known for portraying the villain Sauron in the Lord
of the Rings trilogy. ” A term-based system would
have difficulty retrieving such a context, while
a dense retrieval system would be able to better
match “bad guy” with “villain” and fetch the cor-
rect context. Dense encodings are also learnable
by adjusting the embedding functions, which pro-
vides additional flexibility to have a task-specific
representation. With special in-memory data struc-
tures and indexing schemes, retrieval can be done
efficiently using maximum inner product search
(MIPS) algorithms (e.g., Shrivastava and Li (2014);
Guo et al. (2016)).
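The retrieval step described here reduces to a maximum inner product search between one question embedding and many precomputed passage embeddings. A minimal NumPy sketch using brute-force dot products in place of a dedicated MIPS index (the encoder arguments stand in for the learned dual encoders):

```python
import numpy as np

def build_index(passages, encode_passage):
    """Precompute dense passage embeddings once, offline."""
    return np.stack([encode_passage(p) for p in passages])      # (num_passages, dim)

def retrieve(question, passage_matrix, encode_question, k=20):
    """Score every passage by inner product with the question embedding
    and return the indices of the top-k passages."""
    q = encode_question(question)                                # (dim,)
    scores = passage_matrix @ q
    return np.argsort(-scores)[:k]
```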
However, it is generally believed that learn-
ing a good dense vector representation needs a
large number of labeled pairs of question and con-
texts. Dense retrieval methods have thus never
been shown to outperform TF-IDF/BM25 for open-
domain QA before ORQA (Lee et al., 2019), which
proposes a sophisticated inverse cloze task (ICT)
objective, predicting the blocks that contain the
masked sentence, for additional pretraining. The
question encoder and the reader model are then fine-
tuned using pairs of questions and answers jointly.
Although ORQA successfully demonstrates that
dense retrieval can outperform BM25, setting new
state-of-the-art results on multiple open-domain
arXiv:2004.04906v3 [cs.CL] 30 Sep 2020 |
427986745-768441298640104-1604906292521363076-n.pdf | Revisiting Feature Prediction for Learning Visual
Representations from Video
Adrien Bardes1,2,3,Quentin Garrido1,4,Jean Ponce3,5,6,Xinlei Chen1,Michael Rabbat1,Yann LeCun1,5,6,
Mahmoud Assran1,†,Nicolas Ballas1,†
1FAIR at Meta,2Inria,3École normale supérieure, CNRS, PSL Research University,4Univ. Gustave Eiffel,
CNRS, LIGM,5Courant Institute, New York University,6Center for Data Science, New York University
†Joint last author
This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and
introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without
the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision.
The models are trained on 2 million videos collected from public datasets and are evaluated on downstream
image and video tasks. Our results show that learning by predicting video features leads to versatile visual
representations that perform well on both motion and appearance-based tasks, without adaption of the
model’s parameters; e.g., using a frozen backbone. Our largest model, a ViT-H/16 trained only on videos,
obtains 81.9%on Kinetics-400, 72.2%on Something-Something-v2, and 77.9%on ImageNet1K.
Date:February 14, 2024
Correspondence: {abardes, massran, ballasn}@meta.com
Code:https://github.com/facebookresearch/jepa
Blogpost: Click here
1 Introduction
Humans possess the remarkable ability to map low-level
signals originating from the retina into a semantic spatio-
temporal understanding of the world; synthesizing no-
tions such as objects and global motion (Spelke et al.,
1995). A long-standing goal of the machine learning
community is to identify the principles or objectives that
may guide such unsupervised learning in humans (Field,
1994; Berkes and Wiskott, 2005; Hinton, 1989). One
related hypothesis is based on the predictive feature
principle (Rao and Ballard, 1999), which posits that
representations of temporally adjacent sensory stimuli
should be predictive of each other.
In this work, we revisit feature prediction as a stand-
alone objective for unsupervised learning of visual repre-
sentations from video. Numerous advances in the field —
such as the standard use of transformer architectures in
vision (Dosovitskiy et al., 2020), the maturing of masked
autoencoding frameworks (Xie et al., 2021; Bao et al.,
2021; He et al., 2021), query-based feature pooling (Chen
et al., 2022), joint-embedding predictive architectures
(JEPA) (LeCun, 2022; Assran et al., 2023; Baevski et al.,
2022b), and larger datasets — form a unique arsenal of
tools, which we integrate in a modern and conceptually
simple method, the video joint-embedding predictive ar-
chitecture orV-JEPA, which is based solely on feature
prediction, without using pretrained image encoders,
text, negative examples, human annotations, or pixel-
level reconstruction.
[Figure 1 scatter plot: frozen-evaluation accuracy on Kinetics-400 (x-axis) versus Something-Something-v2 (y-axis) for V-JEPA (ViT-L/16, ViT-H/16) compared with video pixel-prediction models (VideoMAE ViT-H/16, VideoMAEv2 ViT-g/14, OmniMAE ViT-H/16, Hiera-H) and image models (DINOv2 ViT-g/14, OpenCLIP ViT-G/14, I-JEPA ViT-H/16); SOTA fine-tuned task-specific models (MVD on SSv2, UniFormer on K400) are marked for reference.]
Figure 1 V-JEPA models pretrained on video learn versatile visual representations. It performs well on motion-based tasks (Something-Something-v2) and appearance-based tasks (Kinetics 400) without adaptation of the model's parameters, i.e., using the same frozen backbone for both tasks.
We seek to answer the simple question:
How effective is feature prediction as a stand-
alone objective for unsupervised learning from
video with modern tools?
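At a high level, the objective asks a predictor to regress, from the representation of visible video regions, the representation that a target encoder assigns to masked regions. A schematic sketch of one training step (the boolean masking, the stop-gradient on the target features, and the L1 loss follow the general joint-embedding predictive recipe and are assumptions about details not spelled out in this excerpt):

```python
import torch
import torch.nn.functional as F

def feature_prediction_loss(context_encoder, target_encoder, predictor,
                            video_tokens, context_mask, target_mask):
    """video_tokens: (batch, num_patches, dim) patchified video clip.
    context_mask / target_mask: boolean masks over the patch dimension."""
    with torch.no_grad():                                   # target features carry no gradient
        target_feats = target_encoder(video_tokens)[:, target_mask]
    context_feats = context_encoder(video_tokens[:, context_mask])
    predicted = predictor(context_feats, target_mask)       # predict features at masked positions
    return F.l1_loss(predicted, target_feats)
```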
|
2206.02326.pdf | arXiv:2206.02326v1 [cs.LG] 6 Jun 2022
Asymptotic Instance-Optimal Algorithms for Interactive Decision Making
Kefan Dong
Stanford University
kefandong@stanford.eduTengyu Ma
Stanford University
tengyuma@stanford.edu
June 7, 2022
Abstract
Past research on interactive decision making problems (bandits, reinforcement learning, etc.) mostly focuses on the minimax regret that measures the algorithm's performance on the hardest instance. However, an ideal algorithm should adapt to the complexity of a particular problem instance and incur smaller regrets on easy instances than worst-case instances. In this paper, we design the first asymptotic instance-optimal algorithm for general interactive decision making problems with a finite number of decisions under mild conditions. On every instance f, our algorithm outperforms all consistent algorithms (those achieving non-trivial regrets on all instances), and has asymptotic regret C(f) ln n, where C(f) is an exact characterization of the complexity of f. The key step of the algorithm involves hypothesis testing with active data collection. It computes the most economical decisions with which the algorithm collects observations to test whether an estimated instance is indeed correct; thus, the complexity C(f) is the minimum cost to test the instance f against other instances. Our results, instantiated on concrete problems, recover the classical gap-dependent bounds for multi-armed bandits [Lai and Robbins, 1985] and prior works on linear bandits [Lattimore and Szepesvari, 2017], and improve upon the previous best instance-dependent upper bound [Xu et al., 2021] for reinforcement learning.
1 Introduction
Bandit and reinforcement learning (RL) algorithms demonstrated a wide range of successful real-life applications [Silver et al., 2016, 2017, Mnih et al., 2013, Berner et al., 2019, Vinyals et al., 2019, Mnih et al., 2015, Degrave et al., 2022]. Past works have theoretically studied the regret or sample complexity of various interactive decision making problems, such as contextual bandits, reinforcement learning (RL), partially observable Markov decision processes (see Azar et al. [2017], Jin et al. [2018], Dong et al. [2021], Li et al. [2019], Agarwal et al. [2014], Foster and Rakhlin [2020], Jin et al. [2020], and references therein). Recently, Foster et al. [2021] present a unified algorithmic principle for achieving the minimax regret—the optimal regret for the worst-case problem instances.
However, minimax regret bounds do not necessarily always present a full picture of the statistical complexity of the problem. They characterize the complexity of the most difficult instances, but potentially many other instances are much easier. An ideal algorithm should adapt to the complexity of a particular instance and incur smaller regrets on easy instances than the worst-case instances. Thus, an ideal regret bound should be instance-dependent, that is, depending on some properties of each instance. Prior works designed algorithms with instance-dependent regret bounds that are stronger than minimax regret bounds, but they are often not directly comparable because they depend on different properties of the instances, such as the gap conditions and the variance of the value function [Zanette and Brunskill, 2019, Xu et al., 2021, Foster et al., 2020, Tirinzoni et al., 2021].
A more ambitious and challenging goal is to design instance-optimal algorithms that outperform, on every instance, all consistent algorithms (those achieving non-trivial regrets on all instances). Past works designed instance-optimal algorithms for multi-armed bandits [Lai and Robbins, 1985], linear bandits [Kirschner et al., 2021, Hao et al., 2020], Lipschitz bandits [Magureanu et al., 2014], and ergodic MDPs [Ok et al., 2018]. However, instance-optimal regret bounds for tabular reinforcement learning remain an open question, despite recent progress [Tirinzoni et al., 2021,
|
2205.05131.pdf | UL2: Unifying Language Learning Paradigms
Yi Tay∗Mostafa Dehghani∗
Vinh Q. Tran♯Xavier Garcia♯Jason Wei♯Xuezhi Wang♯Hyung Won Chung♯
Siamak Shakeri♯Dara Bahri♭Tal Schuster♭Huaixiu Steven Zheng△
Denny Zhou△Neil Houlsby△Donald Metzler△
Google Brain
Abstract
Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives – two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 (published paper results) on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. On zero-shot MMLU, UL2 20B outperforms T0 and T5 models. Additionally, we show that UL2 20B works well with chain-of-thought prompting and reasoning, making it an appealing choice for research into reasoning at a small to medium scale of 20B parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model, achieving MMLU and Big-Bench scores competitive to FLAN-PaLM 62B. We release Flax-based T5X model checkpoints for the UL2 20B model and Flan-UL2 20B model at https://github.com/google-research/google-research/tree/master/ul2.
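For concreteness, the denoising objectives being mixed here are variants of span corruption: spans of the input are replaced by sentinel tokens and the model must reconstruct them. A simplified sketch of that input/target construction (sentinel naming and span sampling are illustrative; the paper's denoisers differ mainly in corruption rate and span length, details not shown in this excerpt):

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, span_len=3, seed=0):
    """Replace random spans with sentinels; the target lists each sentinel
    followed by the original span it replaced."""
    rng = random.Random(seed)
    budget = max(1, int(len(tokens) * corruption_rate))
    inputs, targets, i, sentinel = [], [], 0, 0
    while i < len(tokens):
        if budget > 0 and rng.random() < corruption_rate:
            span = min(span_len, len(tokens) - i, budget)
            targets += [f"<extra_id_{sentinel}>"] + tokens[i:i + span]
            inputs.append(f"<extra_id_{sentinel}>")
            sentinel += 1
            budget -= span
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

inp, tgt = span_corrupt("the quick brown fox jumps over the lazy dog".split())
```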
∗Yi and Mostafa are co-leads of this project and are denoted with∗.♯denotes technical research contributors. ♭denotes data &
infrastructure contributions.△denotes advising contributions. Don, denoted with□is the last author. Full contributions of all authors
at the end of paper. Correspondence to yitay@google.com ordehghani@google.com .
arXiv:2205.05131v3 [cs.CL] 28 Feb 2023 |
2304.01373.pdf | Pythia : A Suite for Analyzing Large Language Models
Across Training and Scaling
Stella Biderman* 1 2Hailey Schoelkopf* 1 3Quentin Anthony1Herbie Bradley1 4Kyle O’Brien1
Eric Hallahan1Mohammad Aflah Khan5Shivanshu Purohit6 1USVSN Sai Prashanth1Edward Raff2
Aviya Skowron1Lintang Sutawika1 7Oskar van der Wal8
Abstract
How do large language models (LLMs) develop
and evolve over the course of training? How do
these patterns change as models scale? To an-
swer these questions, we introduce Pythia , a suite
of 16 LLMs all trained on public data seen in
the exact same order and ranging in size from
70M to 12B parameters. We provide public ac-
cess to 154 checkpoints for each one of the 16
models, alongside tools to download and recon-
struct their exact training dataloaders for further
study. We intend Pythia to facilitate research in
many areas, and we present several case stud-
ies including novel results in memorization, term
frequency effects on few-shot performance, and
reducing gender bias. We demonstrate that this
highly controlled setup can be used to yield novel
insights toward LLMs and their training dynam-
ics. Trained models, analysis code, training
code, and training data can be found at https:
//github.com/EleutherAI/pythia .
1. Introduction
Over the past several years, large transformer models have
established themselves as the premier methodology for gen-
erative tasks in natural language processing (Brown et al.,
2020; Sanh et al., 2021; Chowdhery et al., 2022). Beyond
NLP, transformers have also made big splashes as genera-
tive models in areas as diverse as text-to-image synthesis
(Ramesh et al., 2022; Crowson et al., 2022; Rombach et al.,
*Equal contribution1EleutherAI2Booz Allen Hamilton,
McLean, USA3Yale University, New Haven, USA4University of
Cambridge, UK5Indraprastha Institute of Information Technology
Delhi, India6Stability AI7Datasaur.ai, USA8Institute for Logic,
Language and Computation, University of Amsterdam, Nether-
lands. Correspondence to: Stella Biderman <stella@eleuther.ai >,
Hailey Schoelkopf <hailey@eleuther.ai >.
Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
2022), protein modeling (Jumper et al., 2021; Ahdritz et al.,
2022), and computer programming (Chen et al., 2021; Xu
et al., 2022; Fried et al., 2022). Despite these successes,
very little is known about how and why these models are so
successful.
Critical to understanding the functioning of transformers
is better understanding how these models behave along
two axes: training and scaling. It is well established that
there are regular and predictable patterns in the behavior of
trained language models as they scale (Kaplan et al., 2020;
Henighan et al., 2020; Hernandez et al., 2021; Mikami et al.,
2021; Pu et al., 2021; Sharma & Kaplan, 2020; Ghorbani
et al., 2021), but prior work connecting these “Scaling Laws”
to the learning dynamics of language models is minimal.
One of the driving reasons for this gap in research is a lack
of access to appropriate model suites to test theories: al-
though there are more publicly available LLMs than ever,
they do not meet common requirements for researchers, as
discussed in Section 2 of this paper. Of the research along
these lines that does exist (McGrath et al., 2021; Tirumala
et al., 2022; Xia et al., 2022), it is overwhelmingly done on
non-public models or model checkpoints, further emphasiz-
ing the importance of having publicly available model suites
for scientific research.
In this paper we introduce Pythia , a suite of decoder-only
autoregressive language models ranging from 70M to 12B
parameters designed specifically to facilitate such scientific
research. The Pythia suite is the only publicly released suite
of LLMs that satisfies three key properties:
1.Models span several orders of magnitude of model
scale.
2.All models were trained on the same data in the same
order.
3.The data and intermediate checkpoints are publicly
available for study.
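In practice, the intermediate checkpoints are exposed as revisions of each model's repository, so a training-dynamics study can iterate over training steps with a few lines of code. A short sketch (the model names and revision strings follow the pattern documented in the Pythia repository; treat the exact identifiers as illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_pythia(size="70m", step=3000, deduped=False):
    """Load one intermediate checkpoint of a Pythia model from the Hugging Face Hub."""
    name = f"EleutherAI/pythia-{size}" + ("-deduped" if deduped else "")
    revision = f"step{step}"                       # checkpoints are stored as repository branches
    tokenizer = AutoTokenizer.from_pretrained(name, revision=revision)
    model = AutoModelForCausalLM.from_pretrained(name, revision=revision)
    return tokenizer, model

# Compare the same model early and late in training.
tok_early, model_early = load_pythia(step=1000)
tok_late, model_late = load_pythia(step=143000)
```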
We train 8 model sizes each on both the Pile (Gao et al.,
2020; Biderman et al., 2022) and the Pile after deduplication,
providing 2 copies of the suite which can be compared.
arXiv:2304.01373v2 [cs.CL] 31 May 2023 |
2306.14846.pdf | ViNT: A Foundation Model for Visual Navigation
Dhruv Shah†, Ajay Sridhar†, Nitish Dashora†,
Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
UC Berkeley
Abstract: General-purpose pre-trained models (“foundation models”) have en-
abled practitioners to produce generalizable solutions for individual machine
learning problems with datasets that are significantly smaller than those required
for learning from scratch. Such models are typically trained on large and diverse
datasets with weak supervision, consuming much more training data than is avail-
able for any individual downstream application. In this paper, we describe the
Visual Navigation Transformer (ViNT), a foundation model that aims to bring
the success of general-purpose pre-trained models to vision-based robotic navi-
gation. ViNT is trained with a general goal-reaching objective that can be used
with any navigation dataset, and employs a flexible Transformer-based architec-
ture to learn navigational affordances and enable efficient adaptation to a variety
of downstream navigational tasks. ViNT is trained on a number of existing naviga-
tion datasets, comprising hundreds of hours of robotic navigation from a variety of
different robotic platforms, and exhibits positive transfer , outperforming special-
ist models trained on narrower datasets. ViNT can be augmented with diffusion-
based goal proposals to explore novel environments, and can solve kilometer-scale
navigation problems when equipped with long-range heuristics. ViNT can also be
adapted to novel task specifications with a technique inspired by prompt-tuning,
where the goal encoder is replaced by an encoding of another task modality (e.g.,
GPS waypoints or turn-by-turn directions) embedded into the same space of goal
tokens. This flexibility and ability to accommodate a variety of downstream prob-
lem domains establish ViNT as an effective foundation model for mobile robotics.
[Figure 1 panels: Training Data; ViNT Foundation Model; Zero-Shot Deployment; Adapt to Downstream Tasks (Kilometer-Scale Exploration, Route-Guided Navigation, Coverage Mapping).]
Figure 1: Overview of the ViNT foundation model. ViNT generalizes zero-shot across environments and
robot embodiments, and can be directly applied to tasks including exploration and navigation around humans.
ViNT can also be fine-tuned with a small amount of data to expand its capabilities to new tasks.
1 Introduction
Recently, machine learning methods have achieved broad success in natural language processing [1],
visual perception [2–4], and other domains [5, 6] by leveraging Internet-scale data to train general-
purpose “foundation” models that can be adapted to new tasks by zero-shot transfer, prompt-tuning,
or fine-tuning on target data [7–10]. Although this paradigm has been successful in many domains,
it is difficult to apply in robotics due to the sheer diversity of environments, platforms, and applica-
tions. In this paper we ask the question: what is required of a foundation model for mobile robotics?
†Lead Authors. Videos, code, and models: general-navigation-models.github.io.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA. arXiv:2306.14846v2 [cs.RO] 24 Oct 2023 |
2207.08286.pdf | An Overview of Distant Supervision for Relation Extraction with a Focus
on Denoising and Pre-training Methods
William P Hogan
Department of Computer Science & Engineering
University of California, San Diego
Abstract
Relation Extraction (RE) is a foundational
task of natural language processing. RE
seeks to transform raw, unstructured text into
structured knowledge by identifying relational
information between entity pairs found in
text. RE has numerous uses, such as knowl-
edge graph completion, text summarization,
question-answering, and search querying. The
history of RE methods can be roughly or-
ganized into four phases: pattern-based RE,
statistical-based RE, neural-based RE, and
large language model-based RE. This survey
begins with an overview of a few exemplary
works in the earlier phases of RE, highlight-
ing limitations and shortcomings to contex-
tualize progress. Next, we review popular
benchmarks and critically examine metrics
used to assess RE performance. We then dis-
cuss distant supervision, a paradigm that has
shaped the development of modern RE meth-
ods. Lastly, we review recent RE works focus-
ing on denoising and pre-training methods.
1 Introduction
Relation extraction (RE), a subtask of information
extraction, is a foundational task in natural lan-
guage processing (NLP). The RE task is to deter-
mine a relationship between two distinct entities
from text, producing fact triples in the form [ head ,
relation ,tail] or, as referred to in some works, [ sub-
ject,predicate ,object ]. For example, after reading
the Wikipedia page on Noam Chomsky, we learn
that Noam was born in Philadelphia, Pennsylvania,
which corresponds to the fact triple [ Noam Chom-
sky,born in ,Philadelphia ]. Fact triples are foun-
dational to human knowledge and play a key role
in many downstream NLP tasks such as question-
answering, search queries, and knowledge-graph
completion (Xu et al., 2016; Lin et al., 2015; Li
et al., 2014).
Figure 1: A sample of relation labels generated by dis-
tant supervision. A knowledge graph, here a small snip-
pet from WikiData (Vrandečić and Krötzsch, 2014), is
paired with sentences to label instances of relations
linking an entity pair.
Distant supervision for relation extraction is a
method that pairs a knowledge graph—a graph of
entities connected by edges labeled with relation
classes—with an unstructured corpus to generate la-
beled data automatically (Mintz et al., 2009). First,
entities from the knowledge graph are identified in
the text, and then the following assumption is made:
all sentences containing an entity pair express the
corresponding relation class, as determined by the
accompanying knowledge graph. Figure 1 pro-
vides an example of automatically generated re-
lation labels pairing a knowledge graph, namely
WikiData (Vrandečić and Krötzsch, 2014), with
raw sentences.
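The labeling heuristic illustrated in Figure 1 can be stated in a few lines: align knowledge-graph entity pairs with sentences and copy the relation label onto every sentence that mentions both entities. A minimal sketch (exact string containment stands in for real entity linking):

```python
def distant_supervision(sentences, kb_triples):
    """kb_triples: iterable of (head, relation, tail) facts from a knowledge graph.
    Returns (sentence, head, tail, relation) examples under the distant-supervision
    assumption that any co-mentioning sentence expresses the relation."""
    labeled = []
    for head, relation, tail in kb_triples:
        for sentence in sentences:
            if head in sentence and tail in sentence:
                labeled.append((sentence, head, tail, relation))
    return labeled

examples = distant_supervision(
    ["Noam Chomsky was born in Philadelphia, Pennsylvania."],
    [("Noam Chomsky", "born in", "Philadelphia")])
```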
Formally, the problem statement for distantly su-arXiv:2207.08286v1 [cs.CL] 17 Jul 2022 |
2210.10760.pdf | Scaling Laws for Reward Model Overoptimization
Leo Gao
OpenAIJohn Schulman
OpenAIJacob Hilton
OpenAI
Abstract
In reinforcement learning from human feedback, it is common to optimize against
a reward model trained to predict human preferences. Because the reward model
is an imperfect proxy, optimizing its value too much can hinder ground truth
performance, in accordance with Goodhart’s law. This effect has been frequently
observed, but not carefully measured due to the expense of collecting human
preference data. In this work, we use a synthetic setup in which a fixed “gold-
standard” reward model plays the role of humans, providing labels used to train a
proxy reward model. We study how the gold reward model score changes as we
optimize against the proxy reward model using either reinforcement learning or
best-of-nsampling. We find that this relationship follows a different functional
form depending on the method of optimization, and that in both cases its coefficients
scale smoothly with the number of reward model parameters. We also study the
effect on this relationship of the size of the reward model dataset, the number of
reward model and policy parameters, and the coefficient of the KL penalty added
to the reward in the reinforcement learning setup. We explore the implications of
these empirical results for theoretical considerations in AI alignment.
1 Introduction
Goodhart’s law is an adage that states, “When a measure becomes a target, it ceases to be a good
measure.” In machine learning, this effect arises with proxy objectives provided by static learned
models, such as discriminators and reward models. Optimizing too much against such a model
eventually hinders the true objective, a phenomenon we refer to as overoptimization . It is important to
understand the size of this effect and how it scales, in order to predict how much a learned model can
be safely optimized against. Moreover, studying this effect empirically could aid in the development
of theoretical models of Goodhart’s law for neural networks, which could be critical for avoiding
dangerous misalignment of future AI systems.
In this work, we study overoptimization in the context of large language models fine-tuned as
reward models trained to predict which of two options a human will prefer. Such reward models
have been used to train language models to perform a variety of complex tasks that are hard to
judge automatically, including summarization [Stiennon et al., 2020], question-answering [Nakano
et al., 2021, Menick et al., 2022], and general assistance [Ouyang et al., 2022, Bai et al., 2022,
Glaese et al., 2022]. Typically, the reward model score is optimized using either policy gradient-
based reinforcement learning or best-of- nsampling, also known as rejection sampling or reranking.
Overoptimization can occur with both methods, and we study both to better understand whether and
how overoptimization behaves differently across both methods.
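Of the two optimization methods, best-of-n is the simpler to state: draw n samples from the policy, keep the one the proxy reward model scores highest, and, in this synthetic setup, measure its quality with the gold reward model. A minimal sketch (the sampling and scoring callables are placeholders for the actual policy and reward models):

```python
def best_of_n(prompt, sample_from_policy, proxy_rm, gold_rm, n=16):
    """Rank n policy samples by the proxy RM and report both RM scores for the winner.
    The gap between proxy and gold scores at large n is the overoptimization effect."""
    candidates = [sample_from_policy(prompt) for _ in range(n)]
    best = max(candidates, key=lambda text: proxy_rm(prompt, text))
    return proxy_rm(prompt, best), gold_rm(prompt, best)
```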
A major challenge in studying overoptimization in this context is the expense of collecting human
preference labels. A large number of labels are required to accurately estimate overall preference
probabilities, and this is exacerbated by small effect sizes and the need to take many measurements in
order to fit scaling laws. To overcome this, we use a synthetic setup that is described in Section 2, in
which labels are supplied by a “gold-standard” reward model (RM) instead of humans.
Preprint. Under review. arXiv:2210.10760v1 [cs.LG] 19 Oct 2022 |
10.1038.s41588-023-01649-8.pdf | Nature Genetics | Volume 56 | March 2024 | 483–492
nature genetics  https://doi.org/10.1038/s41588-023-01649-8
Article
In vitro reconstitution of chromatin domains
shows a role for nucleosome positioning in
3D genome organization
Elisa Oberbeckmann 1,4 , Kimberly Quililan2,3,4, Patrick Cramer1 &
A. Marieke Oudelaar 2
Eukaryotic genomes are organized into chromatin domains. The molecular
mechanisms driving the formation of these domains are difficult to dissect in vivo and remain poorly understood. Here we reconstitute Saccharomyces cerevisiae chromatin in vitro and determine its 3D organization at subnucleosome resolution by micrococcal nuclease-based chromosome conformation capture and molecular dynamics simulations. We show that regularly spaced and phased nucleosome arrays form chromatin domains in vitro that resemble domains in vivo. This demonstrates that neither loop extrusion nor transcription is required for basic domain formation in yeast. In addition, we find that the boundaries of reconstituted domains correspond to nucleosome-free regions and that insulation strength scales with their width. Finally, we show that domain compaction depends on nucleosome linker length, with longer linkers forming more compact structures. Together, our results demonstrate that regular nucleosome positioning is important for the formation of chromatin domains and provide a proof-of-principle for bottom-up 3D genome studies.
The spatial organization of the genome modulates nuclear processes,
including transcription, replication and DNA repair. Eukaryotic
genomes are organized into chromatin structures across different
scales. The smallest unit of chromatin is the nucleosome core parti-
cle, which consists of 147 base pairs (bp) of DNA wrapped around a
histone octamer1–3. Nucleosome core particles are connected by short
DNA ‘linkers’ and form nucleosome arrays, which further organize
into secondary structures that define the orientation of subsequent
nucleosomes with respect to each other. At a larger scale, eukaryotic
genomes organize into self-interacting domains. In mammals, these
domains are formed by at least two distinct mechanisms4. First, active
and inactive regions of chromatin form functionally distinct com -
partments that span a wide range of sizes5. Second, a process of loop
extrusion, mediated by cohesin and CCCTC binding factor (CTCF),
organizes the genome into local structures termed topologically associating domains (TADs), which usually range from 100 kbp to
1 Mbp in size6,7.
The higher-order organization of the genome into self-interacting
domains is conserved in eukaryotes with smaller genomes, includ -
ing Drosophila melanogaster8 and Saccharomyces cerevisiae9,10, in
which domain sizes range from 10 to 500 kbp and 2 to 10 kbp, respec -
tively. These domains are usually referred to with the general terms
chromatin domain, chromosomal domain or chromosomal interac -
tion domain8,10. This reflects that the nature of the domains in these
species and the mechanisms by which they are formed are less well
understood. Hereafter, we will therefore adopt the general term chro -
matin domain to refer to these domains. Because the boundaries of
chromatin domains in fly8,11,12 and yeast10,13 frequently overlap with
promoters of highly transcribed genes, it has been proposed that the
process of transcription or the transcriptional state of chromatin
Received: 30 March 2023
Accepted: 15 December 2023
Published online: 30 January 2024
1Max Planck Institute for Multidisciplinary Sciences, Department of Molecular Biology, Göttingen, Germany. 2Max Planck Institute for Multidisciplinary
Sciences, Genome Organization and Regulation, Göttingen, Germany. 3Present address: The Francis Crick Institute, London, UK. 4These authors
contributed equally: Elisa Oberbeckmann, Kimberly Quililan. e-mail: elisa.oberbeckmann@mpinat.mpg.de; marieke.oudelaar@mpinat.mpg.de |
2306.01708.pdf | Resolving Interference When Merging Models
Prateek Yadav1Derek Tam1
Leshem Choshen2Colin Raffel1Mohit Bansal1
1University of North Carolina at Chapel Hill2IBM Research
leshem.choshen@il.ibm.com
{praty,dtredsox,craffel,mbansal}@cs.unc.edu
Abstract
Transfer learning – i.e., further fine-tuning a pre-trained model on a downstream
task – can confer significant advantages, including improved downstream perfor-
mance, faster convergence, and better sample efficiency. These advantages have
led to a proliferation of task-specific fine-tuned models, which typically can only
perform a single task and do not benefit from one another. Recently, model merging
techniques have emerged as a solution to combine multiple task-specific models
into a single multitask model without performing additional training. However,
existing merging methods often ignore the interference between parameters of
different models, resulting in large performance drops when merging multiple
models. In this paper, we demonstrate that prior merging techniques inadvertently
lose valuable information due to two major sources of interference: (a) interfer-
ence due to redundant parameter values and (b) disagreement on the sign of a
given parameter’s values across models. To address this, we propose our method,
TRIM, ELECT SIGN& M ERGE (TIES-MERGING ), which introduces three novel
steps when merging models: (1) resetting parameters that only changed a small
amount during fine-tuning, (2) resolving sign conflicts, and (3) merging only the
parameters that are in alignment with the final agreed-upon sign. We find that
TIES-MERGING outperforms several existing methods in diverse settings covering
a range of modalities, domains, number of tasks, model sizes, architectures, and
fine-tuning settings. We further analyze the impact of different types of interference
on model parameters, and highlight the importance of resolving sign interference.1
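Read mechanically on task vectors (fine-tuned weights minus the shared initialization), the three steps named in the abstract can be sketched as below. The trimming fraction, the sign-election rule, and the final scaling are filled in from the method's description at a high level; the exact defaults here are assumptions.

```python
import torch

def ties_merge(init, finetuned, density=0.2, lam=1.0):
    """init: dict of pretrained weight tensors; finetuned: list of fine-tuned dicts."""
    merged = {}
    for name, theta0 in init.items():
        taus = torch.stack([ft[name] - theta0 for ft in finetuned])   # task vectors
        # (1) Trim: keep only the largest-magnitude fraction `density` of entries per task.
        flat = taus.abs().flatten(1)
        kth = min(int(flat.shape[1] * (1 - density)) + 1, flat.shape[1])
        thresh = flat.kthvalue(kth, dim=1).values
        taus = taus * (taus.abs() >= thresh.view(-1, *[1] * (taus.dim() - 1)))
        # (2) Elect sign: per-parameter majority sign weighted by magnitude.
        elected = torch.sign(taus.sum(dim=0))
        # (3) Disjoint merge: average only the entries that agree with the elected sign.
        agree = (torch.sign(taus) == elected) & (taus != 0)
        merged_tau = (taus * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
        merged[name] = theta0 + lam * merged_tau
    return merged
```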
1 Introduction
Pre-trained models (PTMs) have become widespread in many real-world applications [ 83,6]. Using
PTMs typically involves fine-tuning them to specialize on a specific task [ 65,12], which can lead to
improved performance with less task-specific labeled data. These benefits have resulted in the release
of thousands of finetuned checkpoints [ 75] derived from popular PTMs such as ViT [ 14] for vision
and T5 [ 54] for language. However, having a separate fine-tuned model for each task has various
drawbacks: (1) for each new application, a separate model has to be stored and deployed [ 17,81], and
(2) models trained in isolation cannot leverage information from related tasks to improve in-domain
performance or out-of-domain generalization [ 62,54,70]. Multitask learning [ 62,53] could address
these concerns but requires costly training and simultaneous access to all tasks [ 17]. Moreover, it can
be complex and resource-intensive to determine how best to mix datasets to ensure that multitask
training is beneficial for all tasks [51, 50, 74, 48, 2, 17].
Recently, a growing body of research has focused on model merging [9,29,31,43,28,76]. One
application of merging involves combining multiple task-specific models into a single multitask
1Our code is available at https://github.com/prateeky2806/ties-merging
Preprint. Under review. arXiv:2306.01708v1 [cs.LG] 2 Jun 2023 |
2312.10003.pdf | REST MEETS REACT: SELF-IMPROVEMENT FOR
MULTI-STEP REASONING LLM AGENT
Renat Aksitov†1, Sobhan Miryoosefi†1, Zonglin Li†1, Daliang Li†1, Sheila Babayan†2,
Kavya Kopparapu†2, Zachary Fisher1, Ruiqi Guo1, Sushant Prakash1, Pranesh Srinivasan3,
Manzil Zaheer2, Felix Yu1, and Sanjiv Kumar1
1Google Research,2Google DeepMind,3Google
†Core contributors
ABSTRACT
Answering complex natural language questions often necessitates multi-step rea-
soning and integrating external information. Several systems have combined
knowledge retrieval with a large language model (LLM) to answer such questions.
These systems, however, suffer from various failure cases, and we cannot directly
train them end-to-end to fix such failures, as interaction with external knowledge
is non-differentiable. To address these deficiencies, we define a ReAct-style LLM
agent with the ability to reason and act upon external knowledge. We further
refine the agent through a ReST-like method that iteratively trains on previous tra-
jectories, employing growing-batch reinforcement learning with AI feedback for
continuous self-improvement and self-distillation. Starting from a prompted large
model and after just two iterations of the algorithm, we can produce a fine-tuned
small model that achieves comparable performance on challenging compositional
question-answering benchmarks with two orders of magnitude fewer parameters.
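Schematically, the ReAct-style agent referred to here alternates model-generated reasoning with tool calls until it commits to an answer. A generic sketch of that control loop (the prompt format, action syntax, and stopping rule are placeholders, not this paper's exact agent):

```python
def react_agent(question, llm, search, max_steps=5):
    """Interleave Thought / Action / Observation turns until the model answers."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")          # model emits a thought and an action
        transcript += "Thought:" + step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        if "Search[" in step:                        # e.g. the action Search[some query]
            query = step.split("Search[")[-1].split("]")[0]
            transcript += f"Observation: {search(query)}\n"
    return None
```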
1 INTRODUCTION
Figure 1: Agent self-improvement and self-distillation.
Bamboogle auto-eval, mean accuracy and standard de-
viation over 10 runs (%).
For many simple natural language tasks,
like basic question-answering or summa-
rization, we can relatively easily decide
whether the final output is good or bad,
collect large amounts of such data, and
train the language models using these out-
comes as feedback. At the same time, for
more complex problems, outcome-based
systems are often insufficient, and a pro-
cess supervision approach has recently
gained much attention as a more promis-
ing alternative (Reppert et al. (2023)).
There is explosive growth in techniques
(Gao et al. (2023); Madaan et al. (2023)),
frameworks (Dohan et al. (2022); Khattab
et al. (2023b)), and libraries (Liu (2022),
Chase (2022)) for defining process-based
workflows with LLMs through human-understandable task decompositions. Many such decomposi-
tions involve interaction with external tools / APIs / environments, in which case the corresponding
multi-step workflow is generally referred to as an LLM agent (Xi et al. (2023)), a system capable of
performing a sequence of actions to achieve a goal.
Let’s consider the task of answering complex, open-ended questions, where the agent needs to use a
search API to look up multiple pieces of information before composing a paragraph-length answer.
One popular approach for building such agents with LLMs is the ReAct method (Yao et al., 2022),
which involves interleaving chain-of-thought reasoning with actions and observations during sev-
arXiv:2312.10003v1 [cs.CL] 15 Dec 2023 |
2310.11564.pdf | PERSONALIZED SOUPS: PERSONALIZED LARGE
LANGUAGE MODEL ALIGNMENT VIA POST-HOC
PARAMETER MERGING
Joel Jang1,2Seungone Kim3Bill Yuchen Lin2Yizhong Wang1Jack Hessel2
Luke Zettlemoyer1Hannaneh Hajishirzi1,2Yejin Choi1,2Prithviraj Ammanabrolu4
1University of Washington2Allen Institute for AI3KAIST AI4UC San Diego
joeljang@cs.washington.edu
ABSTRACT
While Reinforcement Learning from Human Feedback (RLHF) aligns Large Lan-
guage Models (LLMs) with general, aggregate human preferences, it is subop-
timal for learning diverse, individual perspectives. In this work, we study Re-
inforcement Learning from Personalized Human Feedback (RL PHF) problem,
wherein LLMs are aligned to multiple (sometimes conflicting) preferences by
modeling alignment as a Multi-Objective Reinforcement Learning (MORL) prob-
lem. Compared to strong single-objective baselines, we show that we can achieve
personalized alignment by decomposing preferences into multiple dimensions.
These dimensions are defined based on personalizations that are declared as de-
sirable by the user. In this work, we show that they can be efficiently trained
independently in a distributed manner and combined effectively post-hoc through
parameter merging.1
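The post-hoc composition step amounts to a weighted average of the independently trained policy parameters, with the weights chosen at inference time to reflect the user's declared preferences. A minimal sketch over parameter dictionaries (the uniform default weighting is illustrative):

```python
def merge_policies(state_dicts, weights=None):
    """Average the parameters of several preference-specific policies.
    Works on dicts of tensors or arrays; `weights` lets a user emphasize
    some preference dimensions over others at inference time."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged
```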
1 I NTRODUCTION
Reinforcement Learning from Human Feedback (RLHF) (Nakano et al., 2021a; Ouyang et al.,
2022a; Bai et al., 2022a; Dubois et al., 2023; Bai et al., 2022b) typically optimizes a policy model
that receives training signals from a single reward model that aims to capture the general prefer-
ences of a population. In this work, we instead propose Reinforcement Learning from Personalized
Human Feedback (RLPHF), a new, multi-objective formulation of the human preference alignment
problem, where Large Language Models (LLMs) are trained to be efficiently aligned with a range
of different, potentially personalized combinations of human preferences.
We model RLPHF as a Multi-Objective Reinforcement Learning (MORL) problem, which allows
training the policy model with multiple, conflicting objectives since it aims to vary the importance
of each objective during inference. In existing RLHF formulations, pairwise human feedback is col-
lected by asking human annotators to choose which model response is generally better and is used
to train a general reward model. This makes implicit assumptions that may not hold for everyone.
For example, recent work has shown that LLMs aligned with RLHF prefer verbose output gener-
ations (Zheng et al., 2023; Dubois et al., 2023; Wang et al., 2023; Singhal et al., 2023). We aim
to support a wider range of multifaceted preferences that are explicitly declared as desirable by the
user—giving the user control over the facets of output text they want to see as well as the personal
data they wish to reveal to the model. We collect personalized human feedback corresponding to
multiple such dimensions, noting that they may also be conflicting in nature.
We first implement a strong MORL baseline called P ROMPTED -MORL where there are multiple
reward signals for each of the objectives (preferences) given via prompts during RL training. Next,
we propose P ERSONALIZED SOUPS , a method that circumvents simultaneously optimizing multiple
preferences by first optimizing multiple policy models each with distinct preferences with Proximal
Policy Optimization (PPO) and merging the parameters of the policy models whose preferences
we want to composite together on the fly during inference. This modular approach significantly
1Code: https://github.com/joeljang/RLPHF
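As a rough illustration of the post-hoc merging idea described above (a sketch of the general concept, assuming PyTorch state dicts; not the exact procedure or weighting scheme used here), independently trained policy models can be combined by weighted averaging of their parameters:

import torch

def merge_policies(state_dicts, weights):
    """Weighted average of model parameters ("parameter soup") -- a sketch."""
    assert abs(sum(weights) - 1.0) < 1e-6
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# e.g. combine a "concise" policy and a "friendly" policy on the fly at inference time:
# model.load_state_dict(merge_policies([sd_concise, sd_friendly], [0.5, 0.5]))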
|
2401.05300.pdf | I am a Strange Dataset: Metalinguistic Tests for Language Models
Tristan Thrush§, Jared Moore§, Miguel Monares†‡, Christopher Potts§, Douwe Kiela§¶
§Stanford University; †UC San Diego; ‡Playtest AI; ¶Contextual AI
tthrush@stanford.edu
Abstract
Statements involving metalinguistic self-
reference (“This paper has six sections.”) are
prevalent in many domains. Can large language
models (LLMs) handle such language? In this
paper, we present “I am a Strange Dataset”,
a new dataset for addressing this question.
There are two subtasks: generation and
verification . In generation, models continue
statements like “The penultimate word in this
sentence is” (where a correct continuation is
“is”). In verification, models judge the truth
of statements like “The penultimate word in
this sentence is sentence.” (false). We also
provide minimally different metalinguistic
non-self-reference examples to complement
the main dataset by probing for whether
models can handle metalinguistic language
at all. The dataset is hand-crafted by experts
and validated by non-expert annotators. We
test a variety of open-source LLMs (7B to 70B
parameters) as well as closed-source LLMs
through APIs. All models perform close to
chance across both subtasks and even on the
non-self-referential metalinguistic control data,
though we find some steady improvement
with model scale. GPT 4 is the only model to
consistently do significantly better than chance,
and it is still only in the 60% range, while our
untrained human annotators score well in the
89–93% range. The dataset and evaluation
toolkit are available at https://github.com/
TristanThrush/i-am-a-strange-dataset .
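For intuition, the metalinguistic property in the running example is trivial to compute programmatically; the toy checker below is our own illustration and is not part of the released evaluation toolkit.

def penultimate_word(sentence: str) -> str:
    # Strip trailing punctuation and return the second-to-last word.
    words = sentence.rstrip(".!?").split()
    return words[-2]

true_stmt = "The penultimate word in this sentence is is."
false_stmt = "The penultimate word in this sentence is sentence."
print(penultimate_word(true_stmt))   # "is"  -> the statement is true
print(penultimate_word(false_stmt))  # "is"  -> the statement is false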
1 Introduction
Self-reference plays a crucial role in the way we
think about mathematics (Gödel, 1931), theoretical
computer science (Church, 1936), recursive pro-
gramming (Hofstadter, 1979), philosophy (Tarski,
1931), understanding complex cases in hate speech
detection (Allan, 2017), aptitude tests (Propp,
1993), and comedy (Hofstadter, 1985). Some po-
sitions in the philosophy of mind consider self-
referential capabilities to be a key aspect of higher
[Figure 1 shows the example prompt “if someone asks whether this sentence has a capital letter, the correct answer is” with a model continuation of “No.”]
Figure 1: An example highlighting the challenge presented by our task. All models that we tested on our dataset are close to chance-level.
intelligence or even consciousness (Hofstadter,
2007; Baars, 1993). Of course, self-reference is
also pervasive in how we communicate: at least
one paper you read today is bound to contain “In
this paper” (Thrush et al., 2024).
In this paper, we focus on metalinguistic self-
reference, the complex kind of self-reference in
which language is used to make claims about it-
self, as in “This sentence has five words” and “This
paper has six sections”.1Using such language in-
volves reasoning about metalinguistic properties
(counting words, naming parts of speech, etc.) and
resolving self-reference. Humans generally have
no trouble with such language, and may even enjoy
its playful and sometimes paradoxical nature (Hof-
stadter, 1979, 1985, 2007).
Recently, Large Language Models (LLMs) have
demonstrated striking cognitive capabilities (Rad-
ford et al., 2019; Brown et al., 2020; OpenAI, 2022,
2023; Anthropic, 2023; Touvron et al., 2023; Jiang
et al., 2023; Zhu et al., 2023). But do they have
the same mastery over metalinguistic self-reference
as we do? See Figure 1 for an example of the is-
sue that LLMs face. To help address this question,
we present a new task and dataset called “I am
a Strange Dataset”. We are inspired by Douglas
Hofstadter’s explorations of self-reference in lan-
guage (Hofstadter, 1979, 1985, 2007), and borrow
part of the name from one of his books: “I am a
Strange Loop” (Hofstadter, 2007).
1Sentences like “I am Douglas Hofstadter” are self-
referential but not metalinguistic in the sense of interest here. |
2306.00238.pdf | Bytes Are All You Need: Transformers Operating Directly On File Bytes
Maxwell Horton, Sachin Mehta, Ali Farhadi, Mohammad Rastegari
Apple
Abstract
Modern deep learning approaches usually transform
inputs into a modality-specific form. For example, the
most common deep learning approach to image classi-
fication involves decoding image file bytes into an RGB
tensor which is passed into a neural network. Instead,
we investigate performing classification directly on file
bytes, without the need for decoding files at inference
time. Using file bytes as model inputs enables the de-
velopment of models which can operate on multiple
input modalities. Our model, ByteFormer , achieves an
ImageNet Top-1 classification accuracy of 77.33% when
training and testing directly on TIFF file bytes using
a transformer backbone with configuration similar to
DeiT-Ti ( 72.2% accuracy when operating on RGB im-
ages). Without modifications or hyperparameter tuning,
ByteFormer achieves 95.42% classification accuracy when
operating on WAV files from the Speech Commands v2
dataset (compared to state-of-the-art accuracy of 98.7%).
Additionally, we demonstrate that ByteFormer has appli-
cations in privacy-preserving inference. ByteFormer is
capable of performing inference on particular obfuscated
input representations with no loss of accuracy. We also
demonstrate ByteFormer’s ability to perform inference
with a hypothetical privacy-preserving camera which
avoids forming full images by consistently masking 90%
of pixel channels, while still achieving 71.35% accu-
racy on ImageNet. Our code will be made available
at https://github.com/apple/ml-cvnets/
tree/main/examples/byteformer .
1. Introduction
Deep learning inference usually involves explicit model-
ing of the input modality. For example, Vision Transform-
ers (ViTs) [7] explicitly model the 2D spatial structure of
images by encoding image patches into vectors. Similarly,
audio inference often involves computing spectral features
(such as MFCCs [25]) to pass into a network [10, 18]. When
a user wants to perform inference on a file stored on disk
(e.g. a JPEG image file or an MP3 audio file), the user must first decode the file into a modality-specific representation
(e.g. an RGB tensor or MFCCs), as in Figure 1a.
The practice of decoding inputs into a modality-specific
representation has two main drawbacks. First, it requires
hand-crafting an input representation and a model stem for
each input modality. Recent works such as PerceiverIO
[14] and UnifiedIO [24] have shown that Transformer back-
bones can be used for a variety of different tasks. However,
these methods still require modality-specific input prepro-
cessing. For instance, PerceiverIO decodes image files into
[H×W, C]tensors before passing them into the network.
Other modalities input to PerceiverIO are processed into
different forms. We hypothesize that it’s possible to remove
all modality-specific input preprocessing by performing in-
ference directly on file bytes.
The second drawback of decoding inputs into a
modality-specific representation is that it reduces privacy
by exposing the data being analyzed. Consider the case of a
smart-home device that performs inference on RGB images.
If an adversary accesses this model input, the user’s privacy
might be compromised. We hypothesize that inference can
instead be performed on privacy-preserving inputs.
To address these drawbacks, we note that a common
property of many input modalities is that they can be stored
as file bytes. Thus, we use file bytes (without any decoding)
as inputs to our model at inference time (Figure 1b). We
use a modified Transformer [39] architecture for our model,
given their ability to handle a variety of modalities [14, 24]
and variable-length inputs. We call our model ByteFormer.
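A minimal sketch of this input pathway is shown below (our paraphrase in PyTorch, not the released implementation; the real model also uses positional information and a DeiT-Ti-like configuration): raw file bytes are embedded with a 256-entry table and passed to a standard Transformer encoder.

import torch, torch.nn as nn

class ByteClassifier(nn.Module):
    def __init__(self, num_classes, dim=192, depth=4, heads=3):
        super().__init__()
        self.embed = nn.Embedding(256, dim)            # one embedding per byte value
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, byte_ids):                       # byte_ids: (batch, seq)
        h = self.encoder(self.embed(byte_ids))          # positional embeddings omitted for brevity
        return self.head(h.mean(dim=1))                 # mean-pool then classify

raw = open(__file__, "rb").read()[:1024]                # any file's raw, undecoded bytes
x = torch.tensor(list(raw)).unsqueeze(0)
logits = ByteClassifier(num_classes=10)(x)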
We demonstrate the efficacy of ByteFormer on Ima-
geNet [6] classification, achieving 77.33% accuracy on files
stored in the TIFF format. Our model uses transformer
backbone hyperparameters chosen in DeiT-Ti [38] (which
achieves 72.2%accuracy on RGB inputs). We also demon-
strate strong results on PNG and JPEG files. Additionally,
we demonstrate that our classification model can achieve
95.8%accuracy on Speech Commands v2 [42], compara-
ble to state-of-the-art ( 98.7%) [18], without any architec-
ture changes or hyperparameter tuning .
Because ByteFormer can handle a variety of input rep-
resentations, we can also use it to operate on privacy-
preserving inputs. We demonstrate that we can remap in- |
2305.12387.pdf | Optimal Time Complexities of
Parallel Stochastic Optimization Methods
Under a Fixed Computation Model
Alexander Tyurin
KAUST
Saudi Arabia
alexandertiurin@gmail.com
Peter Richtárik
KAUST
Saudi Arabia
richtarik@gmail.com
Abstract
Parallelization is a popular strategy for improving the performance of iterative
algorithms. Optimization methods are no exception: design of efficient parallel
optimization methods and tight analysis of their theoretical properties are important
research endeavors. While the minimax complexities are well known for sequential
optimization methods, the theory of parallel optimization methods is less explored.
In this paper, we propose a new protocol that generalizes the classical oracle frame-
work approach. Using this protocol, we establish minimax complexities for parallel
optimization methods that have access to an unbiased stochastic gradient oracle
with bounded variance. We consider a fixed computation model characterized
by each worker requiring a fixed but worker-dependent time to calculate stochas-
tic gradient. We prove lower bounds and develop optimal algorithms that attain
them. Our results have surprising consequences for the literature of asynchronous
optimization methods.
1 Introduction
We consider the nonconvex optimization problem
min_{x∈Q} { f(x) := E_{ξ∼D}[ f(x; ξ) ] },   (1)
where f : R^d × S_ξ → R, Q ⊆ R^d, and ξ is a random variable with some distribution D on S_ξ. In
machine learning, S_ξ could be the space of all possible data, D is the distribution of the training
dataset, and f(·, ξ) is the loss of a data sample ξ. In this paper we address the following natural setup:
(i) n workers are available to work in parallel,
(ii) the i-th worker requires τ_i seconds1 to calculate a stochastic gradient of f.
The function f is L-smooth and lower-bounded (see Assumptions 7.1–7.2), and stochastic gradients
are unbiased and σ²-variance-bounded (see Assumption 7.3).
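As a toy illustration of this fixed computation model (the τ values below are made up), the number of stochastic gradients worker i can contribute within a time budget T is simply ⌊T/τ_i⌋, which is the quantity the minimax analysis reasons about.

# Toy illustration of the fixed computation model: worker i needs tau[i] seconds
# per stochastic gradient, so within a time budget T it contributes floor(T / tau[i]).
def gradients_within_budget(tau, T):
    return [int(T // t) for t in tau]

tau = [1.0, 2.0, 10.0]                        # hypothetical per-worker gradient times (seconds)
print(gradients_within_budget(tau, T=20.0))   # [20, 10, 2]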
1.1 Classical theory
In the nonconvex setting, gradient descent (GD) is an optimal method with respect to the number of
gradient ( ∇f) calls (Lan, 2020; Nesterov, 2018; Carmon et al., 2020) for finding an approximately
stationary point of f. Obviously, a key issue with GD is that it requires access to the exact gradients
1Or any other unit of time.
37th Conference on Neural Information Processing Systems (NeurIPS 2023). |
2104.08821.pdf | SimCSE: Simple Contrastive Learning of Sentence Embeddings
Tianyu Gao†∗ Xingcheng Yao‡∗ Danqi Chen†
†Department of Computer Science, Princeton University
‡Institute for Interdisciplinary Information Sciences, Tsinghua University
{tianyug,danqic}@cs.princeton.edu
yxc18@mails.tsinghua.edu.cn
Abstract
This paper presents SimCSE, a simple con-
trastive learning framework that greatly ad-
vances state-of-the-art sentence embeddings.
We first describe an unsupervised approach,
which takes an input sentence and predicts
itself in a contrastive objective, with only
standard dropout used as noise. This simple
method works surprisingly well, performing
on par with previous supervised counterparts.
We find that dropout acts as minimal data aug-
mentation, and removing it leads to a repre-
sentation collapse. Then, we propose a super-
vised approach, which incorporates annotated
pairs from natural language inference datasets
into our contrastive learning framework by us-
ing “entailment” pairs as positives and “con-
tradiction” pairs as hard negatives. We evalu-
ate SimCSE on standard semantic textual simi-
larity (STS) tasks, and our unsupervised and
supervised models using BERT base achieve
an average of 76.3% and 81.6% Spearman’s
correlation respectively, a 4.2% and 2.2%
improvement compared to the previous best
results. We also show—both theoretically
and empirically—that the contrastive learning
objective regularizes pre-trained embeddings’
anisotropic space to be more uniform, and it
better aligns positive pairs when supervised
signals are available.1
1 Introduction
Learning universal sentence embeddings is a fun-
damental problem in natural language process-
ing and has been studied extensively in the litera-
ture (Kiros et al., 2015; Hill et al., 2016; Conneau
et al., 2017; Logeswaran and Lee, 2018; Cer et al.,
2018; Reimers and Gurevych, 2019, inter alia ).
In this work, we advance state-of-the-art sentence
*The first two authors contributed equally (listed in alpha-
betical order). This work was done when Xingcheng visited
the Princeton NLP group remotely.
1Our code and pre-trained models are publicly available at
https://github.com/princeton-nlp/SimCSE .embedding methods and demonstrate that a con-
trastive objective can be extremely effective when
coupled with pre-trained language models such as
BERT (Devlin et al., 2019) or RoBERTa (Liu et al.,
2019). We present SimCSE, a simplecontrastive
sentence embedding framework, which can pro-
duce superior sentence embeddings, from either
unlabeled or labeled data.
Ourunsupervised SimCSE simply predicts the
input sentence itself with only dropout (Srivastava
et al., 2014) used as noise (Figure 1(a)). In other
words, we pass the same sentence to the pre-trained
encoder twice : by applying the standard dropout
twice, we can obtain two different embeddings as
“positive pairs”. Then we take other sentences in
the same mini-batch as “negatives”, and the model
predicts the positive one among the negatives. Al-
though it may appear strikingly simple, this ap-
proach outperforms training objectives such as pre-
dicting next sentences (Logeswaran and Lee, 2018)
and discrete data augmentation (e.g., word dele-
tion and replacement) by a large margin, and even
matches previous supervised methods. Through
careful analysis, we find that dropout acts as mini-
mal “data augmentation” of hidden representations
while removing it leads to a representation collapse.
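A minimal sketch of this unsupervised objective is given below (assuming a PyTorch encoder in training mode so that dropout is active; the temperature is illustrative): the same batch is encoded twice, and an in-batch cross-entropy over cosine similarities treats the second dropout view of each sentence as its positive.

import torch, torch.nn.functional as F

def unsup_simcse_loss(encoder, batch_inputs, temperature=0.05):
    # Two forward passes of the *same* batch; dropout gives two different views.
    z1 = encoder(batch_inputs)          # (N, d)
    z2 = encoder(batch_inputs)          # (N, d)
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(z1.size(0))   # the positive of example i is example i
    return F.cross_entropy(sim, labels)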
Oursupervised SimCSE builds upon the recent
success of using natural language inference (NLI)
datasets for sentence embeddings (Conneau et al.,
2017; Reimers and Gurevych, 2019) and incorpo-
rates annotated sentence pairs in contrastive learn-
ing (Figure 1(b)). Unlike previous work that casts
it as a 3-way classification task (entailment, neu-
tral, and contradiction), we leverage the fact that
entailment pairs can be naturally used as positive
instances. We also find that adding correspond-
ing contradiction pairs as hard negatives further
improves performance. This simple use of NLI
datasets achieves a substantial improvement com-
pared to prior methods using the same datasets.
We also compare to other labeled sentence-pair |
1912.02292.pdf | DEEP DOUBLE DESCENT:
WHERE BIGGER MODELS AND MORE DATA HURT
Preetum Nakkiran∗
Harvard University
Gal Kaplun†
Harvard University
Yamini Bansal†
Harvard University
Tristan Yang
Harvard University
Boaz Barak
Harvard University
Ilya Sutskever
OpenAI
ABSTRACT
We show that a variety of modern deep learning tasks exhibit a “double-descent”
phenomenon where, as we increase model size, performance first gets worse and
then gets better. Moreover, we show that double descent occurs not just as a
function of model size, but also as a function of the number of training epochs.
We unify the above phenomena by defining a new complexity measure we call
theeffective model complexity and conjecture a generalized double descent with
respect to this measure. Furthermore, our notion of model complexity allows us to
identify certain regimes where increasing (even quadrupling) the number of train
samples actually hurts test performance.
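The experimental recipe behind these curves can be summarized as a simple sweep (a schematic sketch, not the training code used in the paper): inject label noise, train models of increasing width to interpolation, and record test error at each width.

import random

def corrupt_labels(labels, noise_rate=0.15, num_classes=10, seed=0):
    """Replace a fraction of labels with uniformly random ones (label noise)."""
    rng = random.Random(seed)
    return [rng.randrange(num_classes) if rng.random() < noise_rate else y
            for y in labels]

# Schematic sweep: train_and_eval is a placeholder for "train a width-k ResNet18
# to (near) zero train error and return its test error".
# widths = [1, 2, 4, 8, 16, 32, 64]
# test_errors = [train_and_eval(width=k, labels=corrupt_labels(train_labels)) for k in widths]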
1 INTRODUCTION
Figure 1: Left: Train and test error as a function of model size, for ResNet18s of varying width
on CIFAR-10 with 15% label noise. Right: Test error, shown for varying train epochs. All models
trained using Adam for 4K epochs. The largest model (width 64) corresponds to standard ResNet18.
The bias-variance trade-off is a fundamental concept in classical statistical learning theory (e.g.,
Hastie et al. (2005)). The idea is that models of higher complexity have lower bias but higher vari-
ance. According to this theory, once model complexity passes a certain threshold, models “overfit”
with the variance term dominating the test error, and hence from this point onward, increasing model
complexity will only decrease performance (i.e., increase test error). Hence conventional wisdom
in classical statistics is that, once we pass a certain threshold, “larger models are worse. ”
However, modern neural networks exhibit no such phenomenon. Such networks have millions of
parameters, more than enough to fit even random labels (Zhang et al. (2016)), and yet they perform
much better on many tasks than smaller models. Indeed, conventional wisdom among practitioners
is that “larger models are better” (Krizhevsky et al. (2012), Huang et al. (2018), Szegedy et al.
∗Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We espe-
cially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. Correspondence
Email: preetum@cs.harvard.edu
†Equal contribution
|
2401.10241.pdf | ZERO BUBBLE PIPELINE PARALLELISM
Penghui Qi∗, Xinyi Wan∗, Guangxing Huang & Min Lin
Sea AI Lab
{qiph,wanxy,huanggx,linmin }@sea.com
ABSTRACT
Pipeline parallelism is one of the key components for large-scale distributed train-
ing, yet its efficiency suffers from pipeline bubbles which were deemed inevitable.
In this work, we introduce a scheduling strategy that, to our knowledge, is the
first to successfully achieve zero pipeline bubbles under synchronous training se-
mantics. The key idea behind this improvement is to split the backward compu-
tation into two parts, one that computes gradient for the input and another that
computes for the parameters. Based on this idea, we handcraft novel pipeline
schedules that significantly outperform the baseline methods. We further de-
velop an algorithm that automatically finds an optimal schedule based on spe-
cific model configuration and memory limit. Additionally, to truly achieve zero
bubble, we introduce a novel technique to bypass synchronizations during the op-
timizer step. Experimental evaluations show that our method outperforms the
1F1B schedule up to 23% in throughput under a similar memory limit. This
number can be further pushed to 31% when the memory constraint is relaxed.
We believe our results mark a major step forward in harnessing the true po-
tential of pipeline parallelism. We open sourced our implementation based on
the popular Megatron-LM repository on https://github.com/sail-sg/
zero-bubble-pipeline-parallelism .
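The backward split at the heart of the schedule can be illustrated for a single linear layer as follows (a conceptual sketch in PyTorch, not the Megatron-LM implementation): the input gradient (B) is needed immediately by the previous stage, while the weight gradient (W) can be deferred to fill what would otherwise be a bubble.

import torch

x = torch.randn(8, 16, requires_grad=True)   # activations from the previous stage
W = torch.randn(32, 16, requires_grad=True)  # this stage's weight
y = torch.nn.functional.linear(x, W)         # forward: y = x @ W.T
grad_y = torch.randn_like(y)                 # gradient arriving from the next stage

# B: input gradient, computed first so the previous stage can proceed.
grad_x = grad_y @ W                          # dL/dx = dL/dy @ W
# W: weight gradient, which can be scheduled later.
grad_W = grad_y.t() @ x                      # dL/dW = dL/dy^T @ x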
1 INTRODUCTION
The realm of distributed model training has become a focal point in the deep learning community,
especially with the advent of increasingly large and intricate models. Training these behemoths
often requires a vast amount of GPUs interconnected with various topologies. Various parallelism
techniques have been proposed for training DNN in the past years. Data parallelism (DP) (Goyal
et al., 2017; Li et al., 2020) is the default strategy for models of small to moderate sizes due to
its simplicity. Beyond a certain model size, it is no longer possible to fit the model parameters in a
single GPU. This is when model parallelism comes to the rescue (Harlap et al., 2018; Huang et al.,
2019; Fan et al., 2021; Zheng et al., 2022). There are two main model parallel schemes, tensor
parallelism (TP) and pipeline parallelism (PP). TP splits the matrix multiplication in one layer to
several devices, while PP segments the entire model into different stages which can be processed
across different devices. Notably, ZeRO (Rajbhandari et al., 2020) provides a strong alternative to
model parallelism by sharding parameters across devices, while keeping the simplicity of DP.
Recent research indicates that achieving optimal performance in large-scale training scenarios re-
quires a non-trivial interaction of DP, TP and PP strategies. In the abundance of interconnection
resources, e.g. NVLink between GPUs within one compute node, a hybrid of DP, TP and ZeRO
strategies works efficiently. In contrast, there is extensive empirical evidence (Fan et al. (2021);
Zheng et al. (2022); Narayanan et al. (2021)) showing that PP is particularly advantageous for utilizing
cross-server connections, especially at the scale of thousands of GPUs. This highlights the primary
aim of our work: enhancing the efficiency of PP.
Going deeper into the intricacies of PP, the efficiency of its implementation relies heavily on the
amount of device idle time referred to as pipeline bubbles. Due to the dependency between lay-
ers, bubbles seem inevitable. A prominent early work to address this issue is GPipe (Huang et al.,
2019), which attempts to reduce the bubble ratio by increasing the number of concurrent batches
∗Equal Contributors
|
2310.01352v4.pdf | Published as a conference paper at ICLR 2024
RA-DIT: RETRIEVAL-AUGMENTED DUAL INSTRUC-
TION TUNING
Xi Victoria Lin∗ Xilun Chen∗ Mingda Chen∗
Weijia Shi Maria Lomeli Rich James Pedro Rodriguez Jacob Kahn
Gergely Szilvasy Mike Lewis Luke Zettlemoyer Scott Yih
FAIR at Meta
{victorialin,xilun,mingdachen,scottyih }@meta.com
ABSTRACT
Retrieval-augmented language models (RALMs) improve performance by access-
ing long-tail and up-to-date knowledge from external data stores, but are challeng-
ing to build. Existing approaches require either expensive retrieval-specific modi-
fications to LM pre-training or use post-hoc integration of the data store that leads
to suboptimal performance. We introduce Retrieval-Augmented Dual Instruction
Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third
option by retrofitting any LLM with retrieval capabilities. Our approach operates
in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use
retrieved information, while (2) the other updates the retriever to return more rel-
evant results, as preferred by the LM. By fine-tuning over tasks that require both
knowledge utilization and contextual awareness, we demonstrate that each stage
yields significant performance improvements, and using both leads to additional
gains. Our best model, RA-DIT 65B, achieves state-of-the-art performance across
a range of knowledge-intensive zero- and few-shot learning benchmarks, signif-
icantly outperforming existing in-context RALM approaches by up to +8.9% in
0-shot setting and +1.4% in 5-shot setting on average.
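A highly simplified sketch of the two fine-tuning objectives is given below; the function names, the prompt concatenation, and the KL-based retriever loss are our assumptions about the general shape of the method, not its exact formulation.

import torch.nn.functional as F

def lm_finetune_loss(lm_nll, retrieved_chunk, instruction, target):
    """Step 1 (sketch): standard LM loss on the target given a retrieval-augmented prompt."""
    return lm_nll(prompt=retrieved_chunk + "\n" + instruction, target=target)

def retriever_finetune_loss(retriever_scores, lm_target_logprobs, tau=1.0):
    """Step 2 (sketch): align the retriever's distribution over k candidate chunks
    with how much each chunk helps the LM produce the correct answer."""
    p_retriever = F.log_softmax(retriever_scores / tau, dim=-1)
    p_lm = F.softmax(lm_target_logprobs / tau, dim=-1)
    return F.kl_div(p_retriever, p_lm, reduction="batchmean")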
1 INTRODUCTION
Large language models (LLMs) excel as zero- and few-shot learners across various tasks (Brown
et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a;b; Anil et al., 2023; OpenAI, 2023).
However, because knowledge is represented only in the model parameters, they struggle to capture
long-tail knowledge (Tirumala et al., 2022; Sun et al., 2023) and require substantial resources to be
kept up-to-date (Miller, 2023). Retrieval-Augmented Language Modeling (RALM) integrates LLMs
with non-parametric information retrieval to overcome these limitations (Guu et al., 2020; Borgeaud
et al., 2022; Izacard et al., 2022b; Shi et al., 2023b; Ram et al., 2023). By explicitly decoupling
knowledge retrieval with the backbone language model, such architectures have exhibited superior
performance on knowledge intensive tasks such as open-domain question answering (Lewis et al.,
2020; Izacard et al., 2022b) and live chat interactions (Liu, 2022).
Existing RALM architectures focus on two high-level challenges: (i) enhancing the LLM’s capabil-
ity to incorporate retrieved knowledge (Lewis et al., 2020; Izacard et al., 2022b) and (ii) refining the
retrieval component to return more relevant content (Shi et al., 2023b; Izacard et al., 2022b). Previ-
ous work have also introduced retrieval capabilities at different stages of the model training process.
REALM (Guu et al., 2020) and RETRO (Borgeaud et al., 2022) opt for end-to-end pre-training , in-
corporating the retrieval component from the outset. Atlas (Izacard et al., 2022b) builds upon the T5
language model (Raffel et al., 2020), and continuously pre-trains the framework over unsupervised
text. REPLUG (Shi et al., 2023b) and In-Context RALM (Ram et al., 2023) combine off-the-shelf
LLMs with general-purpose retrievers, showing that these two components can be effectively fused
through the emergent in-context learning capabilities of LLMs. However, extensive pre-training of
such architectures is expensive, and the off-the-shelf fusion approach also has limitations, particu-
larly as the LLMs are not inherently trained to incorporate retrieved content.
∗Equal contribution
|
2304.09871.pdf | A Theory on Adam Instability in Large-Scale Machine Learning
Igor Molybog∗, Peter Albert, Moya Chen, Zachary DeVito,
David Esiobu, Naman Goyal, Punit Singh Koura, Sharan Narang,
Andrew Poulton, Ruan Silva, Binh Tang, Diana Liskovich, Puxin Xu, Yuchen Zhang,
Melanie Kambadur, Stephen Roller, Susan Zhang
Meta AI
April 26, 2023
Abstract
We present a theory for the previously unexplained divergent behavior noticed in the training of
large language models. We argue that the phenomenon is an artifact of the dominant optimization
algorithm used for training, called Adam. We observe that Adam can enter a state in which
the parameter update vector has a relatively large norm and is essentially uncorrelated with the
direction of descent on the training loss landscape, leading to divergence. This artifact is more
likely to be observed in the training of a deep model with a large batch size, which is the typical
setting of large-scale language model training. To argue the theory, we present observations from
the training runs of the language models of different scales: 7 billion, 30 billion, 65 billion, and
546 billion parameters.
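The diagnostic implied by this account is straightforward to instrument (a sketch, not the authors' tooling): track the norm of the applied update and its cosine similarity with the negative gradient over training.

import torch

def update_gradient_stats(params_before, params_after, grads):
    """Cosine similarity between the applied update and the (negative) gradient,
    plus the update norm -- low similarity with a large norm is the warning sign."""
    upd = torch.cat([(a - b).flatten() for a, b in zip(params_after, params_before)])
    g = torch.cat([g.flatten() for g in grads])
    cos = torch.nn.functional.cosine_similarity(upd, -g, dim=0)
    return cos.item(), upd.norm().item()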
1 Introduction
Training instability reported by Chowdhery et al. [2022] is an interesting phenomenon that has only
been reported for the large language models trained on an order of a trillion tokens, posing a threat to
further scaling of the AI systems. Chowdhery et al. [2022] have observed dozens of spikes in the loss
curve throughout training. To mitigate the issue, they re-started training from a checkpoint roughly
100 steps before the spike started, and skipped roughly 200–500 data batches, in order to exclude
batches that were seen right before and during the spike. In that case, the spike of the loss value did
not repeat. The spikes were also not observed when the skipped data was fed through the model again
after the aforementioned mitigation, which implies that the data itself did not cause the spike, but
rather an interference of the data batch with the state of the model training run. The purpose of this
work is to rigorously reproduce the experiment with a different hardware and software setup, come
up with an explanation for the observed behavior supported by empirical evidence and theoretical
arguments, and propose alternative ways of mitigating the issue.
Loss spikes are difficult to study because any reproduction of these spikes at a smaller scale is
not necessarily caused by or remediated by the same factors as in larger scales. We therefore analyze
large-scale language modeling experiments, training four models between 7 billion and 546 billion
parameters. The models are decoder-only transformers [Brown et al., 2020, Smith et al., 2022] with
different depth and embedding dimensions and trained using the AdamW [Loshchilov and Hutter,
2017] algorithm with a linear learning rate schedule. Compared to the modified Adafactor [Shazeer
and Stern, 2018] used by Chowdhery et al. [2022], we did not use the “parameter scaling”, β2 build-up
or the dynamic weight decay. This did not critically change the observed training instabilities. We
also made modifications in the architecture relative to the setup of Chowdhery et al. [2022] so that
the phenomenon we reproduce are robust to some changes in the specifics of model architectures. For
example, we used the ReLU activation function like Zhang et al. [2022] instead of SwiGLU Shazeer
[2020], and absolute learned positional embeddings instead of RoPE Su et al. [2021]. The settings
of each training run that are important in the context of our analysis are displayed in Table 1. We
cross-checked our results with the models trained using a different codebase and a different dataset,
∗igormolybog@meta.com
|
2024.02.27.582234v2.full.pdf | Sequence modeling and design
from molecular to genome scale with Evo
Eric Nguyen∗,1,2, Michael Poli∗,3, Matthew G. Durrant∗,2,
Armin W. Thomas1, Brian Kang1, Jeremy Sullivan2,
Madelena Y. Ng1, Ashley Lewis1, Aman Patel1, Aaron Lou1,
Stefano Ermon1,4, Stephen A. Baccus1, Tina Hernandez-Boussard1, Christopher Ré1,
Patrick D. Hsu†,2,5, and Brian L. Hie†,1,2
1Stanford University,2Arc Institute,3TogetherAI,4CZ Biohub,5University of California, Berkeley
Abstract
The genome is a sequence that completely encodes the DNA, RNA, and proteins that orchestrate the function
of a whole organism. Advances in machine learning combined with massive datasets of whole genomes could
enable a biological foundation model that accelerates the mechanistic understanding and generative design
of complex molecular interactions. We report Evo, a genomic foundation model that enables prediction and
generation tasks from the molecular to genome scale. Using an architecture based on advances in deep
signal processing, we scale Evo to 7 billion parameters with a context length of 131 kilobases (kb) at single-
nucleotide, byte resolution. Trained on 2.7M prokaryotic and phage genomes, Evo can generalize across
the three fundamental modalities of the central dogma of molecular biology to perform zero-shot function
prediction that is competitive with, or outperforms, leading domain-specific language models. Evo also excels
at multi-element generation tasks, which we demonstrate by generating synthetic CRISPR-Cas molecular
complexes and entire transposable systems for the first time. Using information learned over whole genomes,
Evo can also predict gene essentiality at nucleotide resolution and can generate coding-rich sequences up to
650 kb in length, orders of magnitude longer than previous methods. Advances in multi-modal and multi-
scale learning with Evo provide a promising path toward improving our understanding and control of biology
across multiple levels of complexity.
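Single-nucleotide, byte-level modeling makes the tokenizer essentially trivial; the sketch below illustrates the general idea and is not Evo's released tokenizer.

def byte_tokenize(dna: str) -> list[int]:
    """Map each nucleotide character to its byte value (single-nucleotide resolution)."""
    return list(dna.upper().encode("ascii"))

print(byte_tokenize("ACGTTGCA"))  # [65, 67, 71, 84, 84, 71, 67, 65]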
1. Introduction
DNA is the fundamental layer of biological information that is responsible for transmitting the results of evo-
lution across generations of life (Morgan, 1910;Watson and Crick, 1953;Nirenberg and Matthaei, 1961).
Evolutionary variation in genome sequences is a reflection of adaptation and selection for biological function
at the phenotypic level (Dobzhansky, 1951). Rapid advances in DNA sequencing technologies have enabled
the systematic mapping of this evolutionary diversity at the whole-genome scale.
A machine that learns this breadth of information across genomes could model the function of DNA, RNA,
and proteins, as well as their diverse interactions that orchestrate complex biological functions, mediate dis-
ease, or create a complete organism. Modern machine learning algorithms combined with massive datasets of
genomic sequences could enable a general biological foundation model that learns the intrinsic logic of whole
genomes.
However, current efforts to model molecular biology with machine learning have been focused on creating
modality-specific models that are specialized to proteins, regulatory DNA, or RNA (Jumper et al., 2021;Rives
et al.,2021;Avsec et al., 2021;Theodoris et al., 2023). In addition, generative applications in biology have
been limited to the design of single molecules, simple complexes (Watson et al., 2023;Madani et al., 2023;
∗Equal contribution. †Corresponding author. B.L.H. (brianhie@stanford.edu); P.D.H. (patrick@arcinstitute.org).
|
2203.12644.pdf | Linearizing Transformer with Key-Value Memory
Yizhe Zhang∗
Meta AI
yizhezhang@fb.com
Deng Cai∗
The Chinese University of Hong Kong
thisisjcykcd@gmail.com
Abstract
Efficient transformer variants with linear time
complexity have been developed to mitigate
the quadratic computational overhead of the
vanilla transformer. Among them are low-
rank projection methods such as Linformer
and kernel-based Transformers. Despite their
unique merits, they usually suffer from a per-
formance drop comparing with the vanilla
transformer on many sequence generation
tasks, and often fail to obtain computation
gain when the generation is short. We pro-
pose MemSizer, an approach towards closing
the performance gap while improving the effi-
ciency even with short generation. It projects
the source sequences into lower dimension
representations like Linformer, while enjoy-
ing efficient recurrent-style incremental com-
putation similar to kernel-based transformers.
This yields linear computation time and con-
stant memory complexity at inference time.
MemSizer also employs a lightweight multi-
head mechanism which renders the compu-
tation as light as a single-head model. We
demonstrate that MemSizer provides an im-
proved balance between efficiency and accu-
racy over the vanilla transformer and other
efficient transformer variants in three typi-
cal sequence generation tasks, including ma-
chine translation, abstractive text summariza-
tion, and language modeling.
1 Introduction
Transformer (Vaswani et al., 2017) has become the
de facto standard for almost all NLP tasks across
the board. At the core of the vanilla transformer
is the attention mechanism that captures the in-
teractions between feature vectors at different po-
sitions in a sequence. Despite its great success,
the vanilla transformer models are typically com-
putationally expensive as the computation of the
attention mechanism scales quadratically with the
∗Equal contribution.sequence length. This bottleneck limits the effi-
cient deployment of large-scale pre-trained models,
such as GPT-3 (Brown et al., 2020), Image Trans-
former (Parmar et al., 2018), Codex (Chen et al.,
2021) and DALL-E (Ramesh et al., 2021). Training
and deploying such gigantic transformer models
can be prohibitively difficult for scenarios with
limited resource budgets and may result in huge
energy consumption and greenhouse gas emission
(Strubell et al., 2019; Schwartz et al., 2020).
A number of transformer variants have been
proposed to reduce the computational overhead
(Tay et al., 2020c). One family of methods lever-
ages low-rank projections to reduce the number
of pair-wise interactions ( i.e., the size of attention
matrices) (Wang et al., 2020; Xiong et al., 2021;
Tay et al., 2020a). These methods first project
the input sequence into a low-resolution represen-
tation. For example, Wang et al. (2020) project
the length dimension to a fixed feature dimension.
Nevertheless, these methods have difficulties mod-
eling variable-length sequences and autoregressive
(causal) attention, impeding their applications in
sequence generation tasks. Recent works propose
to approximate the softmax attention through ker-
nelization (Katharopoulos et al., 2020; Peng et al.,
2021; Choromanski et al., 2021; Kasai et al., 2021).
For sequence generation tasks, these works can
cache computation in a recurrent manner, leading
to constant memory complexity in sequence length
during inference. Despite the improved efficiency
in long-form generation, the computation gain of
these kernel-based approaches vanishes when the
generation is as short as a typical sentence length.
Additionally, they usually suffer from a perfor-
mance loss when training from scratch (Kasai et al.,
2021).
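The constant-memory recurrence used by such kernel-based variants can be sketched as follows (a generic linear-attention decoding step with the elu+1 feature map, not MemSizer itself): a running state accumulates key-value outer products, so each new token is generated with O(1) memory in sequence length.

import torch

def linear_attention_step(q_t, k_t, v_t, S, z, phi=torch.nn.functional.elu):
    """One decoding step of linear attention.
    S: running sum of phi(k) v^T outer products; z: running sum of phi(k)."""
    fk = phi(k_t) + 1                       # non-negative feature map of the key
    fq = phi(q_t) + 1                       # non-negative feature map of the query
    S = S + torch.outer(fk, v_t)            # (d_k, d_v) state update
    z = z + fk                              # (d_k,) normalizer update
    out = (fq @ S) / (fq @ z + 1e-6)        # (d_v,) attention output
    return out, S, z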
In this work, we propose an approach called
MemSizer, an efficient transformer variant which
follows the paradigm of low-rank projections while
enjoying memory-efficient recurrent-style genera- |
1909.05215.pdf | Published as a conference paper at ICLR 2020
RECONSTRUCTING CONTINUOUS DISTRIBUTIONS OF
3D PROTEIN STRUCTURE FROM CRYO -EM IMAGES
Ellen D. Zhong
MIT
zhonge@mit.edu
Tristan Bepler
MIT
tbepler@mit.edu
Joseph H. Davis∗
MIT
jhdavis@mit.edu
Bonnie Berger∗
MIT
bab@mit.edu
ABSTRACT
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the
structure of proteins and other macromolecular complexes at near-atomic resolution.
In single particle cryo-EM, the central problem is to reconstruct the 3D structure of
a macromolecule from 10^4–10^7 noisy and randomly oriented 2D projection images.
However, the imaged protein complexes may exhibit structural variability, which
complicates reconstruction and is typically addressed using discrete clustering
approaches that fail to capture the full range of protein dynamics. Here, we
introduce a novel method for cryo-EM reconstruction that extends naturally to
modeling continuous generative factors of structural heterogeneity. This method
encodes structures in Fourier space using coordinate-based deep neural networks,
and trains these networks from unlabeled 2D cryo-EM images by combining
exact inference over image orientation with variational inference for structural
heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can
perform ab initio reconstruction of 3D protein complexes from simulated and real
2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-
based approach for cryo-EM reconstruction and the first end-to-end method for
directly reconstructing continuous ensembles of protein structures from cryo-EM
images.
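The core representational choice — a coordinate-based network mapping a 3D Fourier coordinate plus a conformation latent to a density value — can be sketched as below (an illustrative MLP with sinusoidal features; the layer sizes and encoding are our assumptions, not the cryoDRGN architecture).

import torch, torch.nn as nn

class CoordinateVolume(nn.Module):
    def __init__(self, z_dim=8, n_freq=16, hidden=256):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freq)          # sinusoidal frequency bands
        in_dim = 3 * 2 * n_freq + z_dim
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, coords, z):                          # coords: (N, 3), z: (N, z_dim)
        ang = coords.unsqueeze(-1) * self.freqs            # (N, 3, n_freq)
        feats = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.mlp(torch.cat([feats, z], dim=-1))     # predicted density value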
1 INTRODUCTION
Cryo-electron microscopy (cryo-EM) is a Nobel Prize-winning technique capable of determining the
structure of proteins and macromolecular complexes at near-atomic resolution. In a single particle
cryo-EM experiment, a purified solution of the target protein or biomolecular complex is frozen in
a thin layer of vitreous ice and imaged at sub-nanometer resolution using an electron microscope.
After initial preprocessing and segmentation of the raw data, the dataset typically comprises 10^4–10^7
noisy projection images. Each image contains a separate instance of the molecule, recorded as the
molecule’s electron density integrated along the imaging axis (Figure 1). A major bottleneck in
cryo-EM structure determination is the computational task of 3D reconstruction, where the goal is
to solve the inverse problem of learning the structure, i.e. the 3D electron density volume, which
gave rise to the projection images. Unlike classic tomographic reconstruction (e.g. MRI), cryo-
EM reconstruction is complicated by the unknown orientation of each copy of the molecule in the
ice. Furthermore, cryo-EM reconstruction algorithms must handle challenges such as an extremely
low signal to noise ratio (SNR), unknown in-plane translations, imperfect signal transfer due to
microscope optics, and discretization of the measurements. Despite these challenges, continuing
advances in hardware and software have enabled structure determination at near-atomic resolution
forrigid proteins (Kühlbrandt (2014); Scheres (2012b); Renaud et al. (2018); Li et al. (2013)).
Many proteins and other biomolecules are intrinsically flexible and undergo large conformational
changes to perform their function. Since each cryo-EM image contains a unique instance of the
molecule of interest, cryo-EM has the potential to resolve structural heterogeneity, which is experi-
mentally infeasible with other structural biology techniques such as X-ray crystallography. However,
this heterogeneity poses a substantial challenge for reconstruction as each image is no longer of the
same structure. Traditional reconstruction algorithms address heterogeneity with discrete clustering
∗Corresponding authors
|
2104.08663v2.pdf | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of
Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP-TUDA)
Department of Computer Science, Technische Universität Darmstadt
www.ukp.tu-darmstadt.de
Abstract
Neural IR models have often been studied in
homogeneous and narrow settings, which has
considerably limited insights into their gen-
eralization capabilities. To address this, and
to allow researchers to more broadly estab-
lish the effectiveness of their models, we in-
troduce BEIR (Benchmarking IR), a hetero-
geneous benchmark for information retrieval.
We leverage a careful selection of 17 datasets
for evaluation spanning diverse retrieval tasks
including open-domain datasets as well as nar-
row expert domains. We study the effective-
ness of nine state-of-the-art retrieval models
in a zero-shot evaluation setup on BEIR, find-
ing that performing well consistently across all
datasets is challenging.
Our results show BM25 is a robust baseline
and Reranking-based models overall achieve
the best zero-shot performances, however, at
high computational costs. In contrast, Dense-
retrieval models are computationally more effi-
cient but often underperform other approaches,
highlighting the considerable room for im-
provement in their generalization capabilities.
In this work, we extensively analyze different
retrieval models and provide several sugges-
tions that we believe may be useful for future
work. BEIR datasets and code are available at
https://github.com/UKPLab/beir .
1 Introduction
Many real-world NLP problems rely on a practi-
cal and efficient retrieval component as a first step
to find relevant information. Examples are open-
domain question-answering (Chen et al., 2017),
claim-verification (Thorne et al., 2018), and dupli-
cate question detection (Zhang et al., 2015). Tra-
ditionally, retrieval has been dominated by lexical
approaches like TF-IDF or BM25 (Robertson and
Zaragoza, 2009). However, these approaches suffer
from what is known as lexical gap (Berger et al.,
2000) and only retrieve documents that contain the keywords also present within the query. Fur-
ther, queries and documents are treated in a bag-of-
words manner which does not take word ordering
into consideration.
Recently, deep learning and in particular pre-
trained Transformer models like BERT (Devlin
et al., 2018) have become popular in the infor-
mation retrieval space (Lin et al., 2020). They
overcome the lexical gap by mapping queries and
documents to a dense vector (Guo et al., 2016; Lee
et al., 2019; Karpukhin et al., 2020; Guu et al.,
2020; Gao et al., 2020; Liang et al., 2020; Ma
et al., 2021). The relevant documents for a given
query are then retrieved using (approximate) near-
est neighbor search (Johnson et al., 2017).
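A minimal sketch of this dense-retrieval step (generic, not tied to any particular model in the benchmark): encode documents and queries into vectors once, then score by inner product and take the top-k.

import numpy as np

def dense_retrieve(query_vec, doc_vecs, k=10):
    """Exact (non-approximate) nearest-neighbor search by inner product."""
    scores = doc_vecs @ query_vec                 # (num_docs,)
    topk = np.argsort(-scores)[:k]
    return topk, scores[topk]

rng = np.random.default_rng(0)
doc_vecs = rng.standard_normal((1000, 768))       # stand-ins for encoder outputs
query_vec = rng.standard_normal(768)
print(dense_retrieve(query_vec, doc_vecs, k=3)[0])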
Another widely used approach involves re-
ranking documents from the output of a first-stage
retrieval system (Nogueira et al., 2019a, 2020;
Nogueira and Cho, 2020; Khattab and Zaharia,
2020). While dense retrieval approaches try to
overcome the (potential) lexical gap, re-ranking
approaches aim to create a better comparison of
the retrieved documents. Different approaches can
also be combined together (Ding et al., 2020; Gao
et al., 2020; Luan et al., 2021).
Previous approaches were commonly trained on
rather large datasets like the Natural Questions
(NQ) dataset (Kwiatkowski et al., 2019) contain-
ing around 133k training examples or the MS-
MARCO dataset (Nguyen et al., 2016) with more
than 500k training examples. Existing approaches
have been shown to perform well when evaluated
in-domain or for similar tasks (Nogueira and Cho,
2020; Karpukhin et al., 2020; Ding et al., 2020).
However, large training corpora are not available
for most tasks and domains, and creating them is
often too expensive to be feasible. Hence,
in most scenarios, we apply retrieval models in a
zero-shot setup, i.e. pre-trained models are applied
out-of-the-box across new tasks and domains. In |
2402.08609.pdf | 2024-2-14
Mixtures of Experts Unlock Parameter Scaling
for Deep RL
Johan Obando-Ceron*,1,2,3, Ghada Sokar*,1, Timon Willi*,4, Clare Lyle1, Jesse Farebrother1,2,5, Jakob
Foerster4, Gintare Karolina Dziugaite1,2,5, Doina Precup1,2,5 and Pablo Samuel Castro1,2,3
*Equal contributions,1Google DeepMind,2Mila - Québec AI Institute,3Université de Montréal,4University of Oxford,5McGill
University
The recent rapid progress in (self) supervised learning models is in large part predicted by empirical
scaling laws: a model’s performance scales proportionally to its size. Analogous scaling laws remain
elusive for reinforcement learning domains, however, where increasing the parameter count of a model
often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert
(MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results
in more parameter-scalable models, evidenced by substantial performance increases across a variety of
training regimes and model sizes. This work thus provides strong empirical evidence towards developing
scaling laws for reinforcement learning.
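For reference, a compact sketch of a Soft MoE layer is given below (simplified to one slot per expert and written in PyTorch; it follows the general dispatch/combine idea of Puigcerver et al. (2023) rather than any specific implementation).

import torch, torch.nn as nn

class SoftMoE(nn.Module):
    """Simplified Soft MoE: tokens are softly dispatched to one slot per expert,
    each expert processes its slot, and outputs are softly combined back to tokens."""
    def __init__(self, dim, num_experts=4, hidden=256):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(dim, num_experts) * dim ** -0.5)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts))

    def forward(self, x):                       # x: (batch, tokens, dim)
        logits = x @ self.phi                   # (batch, tokens, experts)
        dispatch = logits.softmax(dim=1)        # how much each token feeds each slot
        combine = logits.softmax(dim=2)         # how much each slot feeds each token
        slots = torch.einsum("btd,bte->bed", x, dispatch)        # (batch, experts, dim)
        slot_out = torch.stack([f(slots[:, i]) for i, f in enumerate(self.experts)], dim=1)
        return torch.einsum("bed,bte->btd", slot_out, combine)   # (batch, tokens, dim)

Because tokens are mixed into slots with soft, differentiable weights, such a layer avoids the discrete routing decisions of classical sparse MoEs while keeping per-expert computation fixed.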
1. Introduction
Deep Reinforcement Learning (RL) – the com-
bination of reinforcement learning algorithms
with deep neural networks – has proven effective
at producing agents that perform complex tasks
at super-human levels (Bellemare et al., 2020;
Berner et al., 2019; Fawzi et al., 2022; Mnih et al.,
2015; Vinyals et al., 2019). While deep networks
are critical to any successful application of RL in
complex environments, their design and learning
dynamics in RL remain a mystery. Indeed, recent
work highlights some of the surprising phenom-
ena that arise when using deep networks in RL,
often going against the behaviours observed in
supervised learning settings (Ceron et al., 2023;
Graesser et al., 2022; Kumar et al., 2021a; Lyle
et al., 2022a; Nikishin et al., 2022; Ostrovski et al.,
2021; Sokar et al., 2023).
The supervised learning community convinc-
ingly showed that larger networks result in im-
proved performance, in particular for language
models (Kaplan et al., 2020). In contrast, re-
cent work demonstrates that scaling networks
in RL is challenging and requires the use of so-
phisticated techniques to stabilize learning, such
as supervised auxiliary losses, distillation, and
pre-training (Farebrother et al., 2022; Schwarzer
et al., 2023; Taiga et al., 2022). Furthermore,
deep RL networks are under-utilizing their pa-rameters, which may account for the observed
difficulties in obtaining improved performance
from scale (Kumar et al., 2021a; Lyle et al., 2022a;
Sokar et al., 2023). Parameter count cannot be
scaled efficiently if those parameters are not used
effectively.
Architectural advances, such as transformers
(Vaswani et al., 2017), adapters (Houlsby et al.,
2019), and Mixtures of Experts (MoEs; Shazeer
et al., 2017), have been central to the scaling
properties of supervised learning models, espe-
cially in natural language and computer vision
problem settings. MoEs, in particular, are cru-
cial to scaling networks to billions (and recently
trillions) of parameters, because their modular-
ity combines naturally with distributed computa-
tion approaches (Fedus et al., 2022). Additionally,
MoEs induce structured sparsity in a network, and
certain types of sparsity have been shown to im-
prove network performance (Evci et al., 2020;
Gale et al., 2019).
In this paper, we explore the effect of mix-
ture of experts on the parameter scalability of
value-based deep RL networks, i.e., does perfor-
mance increase as we increase the number of
parameters? We demonstrate that incorporating
Soft MoEs (Puigcerver et al., 2023) strongly im-
proves the performance of various deep RL agents,
and performance improvements scale with the
Corresponding author(s): psc@google.com
©2024 Google DeepMind. All rights reserved. |
2306.17563.pdf | Preprint
LARGE LANGUAGE MODELS ARE EFFECTIVE TEXT
RANKERS WITH PAIRWISE RANKING PROMPTING
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen,
Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky
Google Research
{zhenqin,jagerman,kaihuibj,hlz,junru,jmshen,tianqiliu,jialu,
metzler,xuanhui,bemike}@google.com
ABSTRACT
Ranking documents using Large Language Models (LLMs) by directly feeding
the query and candidate documents into the prompt is an interesting and prac-
tical problem. However, there has been limited success so far, as researchers
have found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these rank-
ing formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard bench-
marks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based
on the Flan-UL2 model with 20B parameters outperforms the previous best ap-
proach in the literature, which is based on the blackbox commercial GPT-4 that
has 50x (estimated) model size, by over 5% at NDCG@1. On TREC-DL2019,
PRP is only inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 met-
rics, while outperforming other existing solutions, such as InstructGPT which has
175B parameters, by over 10% for nearly all ranking metrics. Furthermore, we
propose several variants of PRP to improve efficiency and show that it is possible
to achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering.
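To make the pairwise formulation concrete, the sketch below shows prompt construction and a simple win-count aggregation over all pairs; the prompt wording, the llm_prefers_first callable, and the all-pairs aggregation are illustrative assumptions (cheaper sorting- and sliding-window-style variants are also possible).

from itertools import combinations

PROMPT = ("Given a query \"{q}\", which of the following two passages is more "
          "relevant to the query?\nPassage A: {a}\nPassage B: {b}\n"
          "Output Passage A or Passage B:")

def prp_rank(query, passages, llm_prefers_first):
    """Rank passages by number of pairwise wins (all-pairs variant)."""
    wins = [0] * len(passages)
    for i, j in combinations(range(len(passages)), 2):
        # Query the LLM in both orders to reduce position bias.
        if llm_prefers_first(PROMPT.format(q=query, a=passages[i], b=passages[j])):
            wins[i] += 1
        else:
            wins[j] += 1
        if llm_prefers_first(PROMPT.format(q=query, a=passages[j], b=passages[i])):
            wins[j] += 1
        else:
            wins[i] += 1
    return sorted(range(len(passages)), key=lambda i: -wins[i])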
1 INTRODUCTION
Large Language Models (LLMs) such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al.,
2022) have demonstrated impressive performance on a wide range of natural language tasks, achiev-
ing comparable or better performance when compared with their supervised counterparts that are
potentially trained with millions of labeled examples, even in the zero-shot setting (Kojima et al.,
2022; Agrawal et al., 2022; Huang et al., 2022; Hou et al., 2023).
However, there is limited success for the important text ranking problem using LLMs (Ma et al.,
2023). Existing results usually significantly underperform well-trained baseline rankers (e.g.,
Nogueira et al. (2020); Zhuang et al. (2023)). The only exception is a recent approach proposed
in (Sun et al., 2023), which depends on the blackbox, giant, and commercial GPT-4 system. Besides
the technical concerns such as sensitivity to input order (ranking metrics can drop by more than
50% when the input document order changes), we argue that relying on such blackbox systems is
not ideal for academic researchers due to significant cost constraints and access limitations to these
systems, though we do acknowledge the value of such explorations in showing the capacity of LLMs
for ranking tasks.
In this work, we first discuss why it is difficult for LLMs to perform ranking tasks with existing
methods, specifically, the pointwise and listwise formulations. For pointwise approaches, ranking
requires LLMs to output calibrated prediction probabilities before sorting, which is known to be
very difficult and is not supported by the generation-only LLM APIs (such as GPT-4). For listwise
approaches, even with instructions that look very clear to humans, LLMs can frequently generate
|
2310.07096.pdf | Sparse Universal Transformer
Shawn Tan1 *
tanjings@mila.quebec
Yikang Shen2 *
yikang.shen@ibm.com
Zhenfang Chen2
zfchen@ibm.com
Aaron Courville1
courvila@iro.umontreal.ca
Chuang Gan2
chuangg@ibm.com
1Mila, University of Montreal2MIT-IBM Watson AI Lab
Abstract
The Universal Transformer (UT) is a variant of
the Transformer that shares parameters across
its layers. Empirical evidence shows that UTs
have better compositional generalization than
Vanilla Transformers (VTs) in formal language
tasks. The parameter-sharing also affords it
better parameter efficiency than VTs. Despite
its many advantages, scaling UT parameters
is much more compute and memory intensive
than scaling up a VT. This paper proposes the
Sparse Universal Transformer (SUT), which
leverages Sparse Mixture of Experts (SMoE)
and a new stick-breaking-based dynamic halt-
ing mechanism to reduce UT’s computation
complexity while retaining its parameter effi-
ciency and generalization ability. Experiments
show that SUT achieves the same performance
as strong baseline models while only using
half computation and parameters on WMT’14
and strong generalization results on formal lan-
guage tasks (Logical inference and CFQ). The
new halting mechanism also enables around
50% reduction in computation during inference
with very little performance decrease on formal
language tasks.
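The stick-breaking view of dynamic halting can be illustrated in a few lines (a schematic sketch; in SUT the mechanism operates per position inside the shared block): each layer emits a halting probability, and the probability of exiting at layer l is that probability times the remaining "stick".

def stick_breaking_weights(halt_probs):
    """Turn per-layer halting probabilities into a distribution over exit layers."""
    weights, remaining = [], 1.0
    for p in halt_probs:
        weights.append(remaining * p)   # probability of halting exactly at this layer
        remaining *= (1.0 - p)
    weights[-1] += remaining            # leftover mass is folded into the final layer
    return weights

print(stick_breaking_weights([0.1, 0.5, 0.9]))  # [0.1, 0.45, 0.45], summing to 1.0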
1 Introduction
Recent theoretical work has pointed out that finite-
depth Transformers have an issue of expressibility
that will result in failure to generalize (Hahn, 2020;
Hao et al., 2022; Merrill et al., 2022; Liu et al.,
2022). Delétang et al. (2022) ran several neural
architectures on a suite of different synthetic lan-
guages generated from different levels of the Chom-
sky hierarchy and empirically confirmed these re-
sults, showing that VTs have difficulty generaliz-
ing to Regular languages. Universal Transformers
(UTs; Dehghani et al. 2018) are Transformers that
share parameters at every layer of the architecture.
Csordás et al. (2021) performed several compositional generalization experiments on VTs and UTs along with absolute and relative position embeddings, and showed that UTs with relative positional embeddings performed better on these tasks.
Figure 1: A VT has separate Transformer blocks for each layer, with different parameters. For a UT with the same number of parameters, the UT block will be ∼3 times the dimensions of each VT block. Running this block for 3 layers would then incur approximately 9 times the runtime memory. Using SMoEs can recover approximately the same computational cost as the VT.
However, the task of scaling UTs is challenging
due to their computation complexity (Kaplan et al.,
2020; Tay et al., 2022; Takase and Kiyono, 2021).
Consider a VT with P parameters for each layer and L layers. Evaluating such a VT has computation complexity associated with the model size LP. A size-equivalent UT would have a UT block with LP parameters and computation complexity of approximately LP to run the block once. To run such a UT for an equivalent L layers would incur a complexity of L²P. This increased computation
complexity directly translates to increased train-
ing and inference time. According to Takase and
Kiyono (2021), UT requires two times the training
time and far more GPU memory than VT on the WMT English-German translation task.
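To make the parameter/compute comparison above concrete, here is a small illustrative calculation of our own (not code from the paper); P is the per-layer parameter count of the VT, and compute is counted in parameter-applications per forward pass:

```python
def vt_cost(P, L):
    """Vanilla Transformer: L distinct blocks of P parameters, each run once."""
    return {"params": L * P, "compute": L * P}

def ut_cost(P, L):
    """Parameter-matched Universal Transformer: one shared block of L*P
    parameters that is evaluated at every one of the L layers."""
    return {"params": L * P, "compute": L * (L * P)}

P, L = 10_000_000, 12
print("VT:", vt_cost(P, L))
print("UT:", ut_cost(P, L))   # same parameter count, but L times the compute
```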
Sparsely activated neural networks were intro-
duced to reduce the computation complexity of |
1502.05767.pdf | Automatic Differentiation
in Machine Learning: a Survey
Atılım Güneş Baydin gunes@robots.ox.ac.uk
Department of Engineering Science
University of Oxford
Oxford OX1 3PJ, United Kingdom
Barak A. Pearlmutter barak@pearlmutter.net
Department of Computer Science
National University of Ireland Maynooth
Maynooth, Co. Kildare, Ireland
Alexey Andreyevich Radul axch@mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, United States
Jeffrey Mark Siskind qobi@purdue.edu
School of Electrical and Computer Engineering
Purdue University
West Lafayette, IN 47907, United States
Abstract
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
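As a concrete illustration of one family of techniques the survey covers, here is a minimal forward-mode AD sketch using dual numbers; it is our own toy example, not code from the paper:

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Dual number val + dot*eps with eps^2 = 0: `val` carries the value,
    `dot` carries the derivative, propagated exactly by the chain rule."""
    val: float
    dot: float = 0.0
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return sin(x * x) + 3 * x      # f(x) = sin(x^2) + 3x

x = Dual(2.0, 1.0)                 # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                # f(2) and f'(2) = 2*2*cos(4) + 3
```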
Keywords: Backpropagation, Differentiable Programming |
1611.03852v3.pdf | A Connection Between Generative Adversarial
Networks, Inverse Reinforcement Learning, and
Energy-Based Models
Chelsea Finn∗, Paul Christiano∗, Pieter Abbeel, Sergey Levine
University of California, Berkeley
{cbfinn,paulfchristiano,pabbeel,svlevine}@eecs.berkeley.edu
Abstract
Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.
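For readers wanting the construction behind "a GAN in which the generator's density is provided as an additional input to the discriminator", the discriminator in this setup is typically parameterized as below; the notation (learned cost c_θ, partition estimate Z, sampler density q) is ours and the paper's exact form may differ slightly:

\[ D_\theta(\tau) \;=\; \frac{\tfrac{1}{Z}\exp\!\big(-c_\theta(\tau)\big)}{\tfrac{1}{Z}\exp\!\big(-c_\theta(\tau)\big) + q(\tau)} \]

Training D_θ with the standard GAN objective then updates c_θ in the same way as a sample-based MaxEnt IRL cost update, which is the equivalence the abstract refers to.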
1 Introduction
Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator [8]. While the idea of learning objectives is relatively new to the field of generative modeling, learning cost or reward functions has long been studied in control [5] and was popularized in 2000 for reinforcement learning problems [15]. In these fields, learning the cost function underlying demonstrated behavior is referred to as inverse reinforcement learning (IRL) or inverse optimal control (IOC). At first glance, the connection between cost learning in RL and cost learning for generative models may appear to be superficial; however, if we apply GANs to a setting where the generator density can be efficiently evaluated, the result is exactly equivalent to a sample-based algorithm for maximum entropy (MaxEnt) IRL. Interestingly, as MaxEnt IRL is an energy-based model, this connection suggests a method for using GANs to train a broader class of energy-based models.
MaxEnt IRL is a widely-used objective for IRL, proposed by Ziebart et al. [27]. Sample-based algorithms for performing maximum entropy (MaxEnt) IRL have scaled cost learning to scenarios
∗Indicates equal contribution. |
1601.00670.pdf | Variational Inference: A Review for Statisticians
David M. Blei
Department of Computer Science and Statistics
Columbia University
Alp Kucukelbir
Department of Computer Science
Columbia University
Jon D. McAuliffe
Department of Statistics
University of California, Berkeley
May 11, 2018
Abstract
One of the core problems of modern statistics is to approximate difficult-to-compute
probability densities. This problem is especially important in Bayesian statistics, which
frames all inference about unknown quantities as a calculation involving the posterior
density. In this paper, we review variational inference (VI), a method from machine
learning that approximates probability densities through optimization. VI has been used
in many applications and tends to be faster than classical methods, such as Markov chain
Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then
to find the member of that family which is close to the target. Closeness is measured
by Kullback-Leibler divergence. We review the ideas behind mean-field variational
inference, discuss the special case of VI applied to exponential family models, present
a full example with a Bayesian mixture of Gaussians, and derive a variant that uses
stochastic optimization to scale up to massive data. We discuss modern research in VI and
highlight important open problems. VI is powerful, but it is not yet well understood. Our
hope in writing this paper is to catalyze statistical research on this class of algorithms.
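As background for the optimization described above (a standard identity, not quoted from the paper): for data x, latent variables z, and a candidate density q(z),

\[ \log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]}_{\text{ELBO}(q)} \;+\; \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big), \]

so, because the KL term is nonnegative, maximizing the ELBO over the chosen family of densities is equivalent to minimizing the KL divergence from q to the exact posterior.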
Keywords: Algorithms; Statistical Computing; Computationally Intensive Methods.
|
2310.17722.pdf | LARGE LANGUAGE MODELS AS GENERALIZABLE
POLICIES FOR EMBODIED TASKS
Andrew Szot, Max Schwarzer, Harsh Agrawal, Bogdan Mazoure, Walter Talbott
Katherine Metcalf, Natalie Mackraz, Devon Hjelm, Alexander Toshev
Apple
ABSTRACT
We show that large language models (LLMs) can be adapted to be generalizable
policies for embodied visual tasks. Our approach, called Large LAnguage model
Reinforcement Learning Policy (LLaRP), adapts a pre-trained frozen LLM to take
as input text instructions and visual egocentric observations and output actions di-
rectly in the environment. Using reinforcement learning, we train LLaRP to see
and act solely through environmental interactions. We show that LLaRP is robust
to complex paraphrasings of task instructions and can generalize to new tasks that
require novel optimal behavior. In particular, on 1,000 unseen tasks it achieves
42% success rate, 1.7x the success rate of other common learned baselines or zero-
shot applications of LLMs. Finally, to aid the community in studying language
conditioned, massively multi-task, embodied AI problems we release a novel
benchmark, Language Rearrangement, consisting of 150,000 training and 1,000
testing tasks for language-conditioned rearrangement. Video examples of LLaRP
in unseen Language Rearrangement instructions are at https://llm-rl.github.io.
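A schematic of the policy described above, written from the abstract alone; `frozen_llm` and `visual_encoder` are placeholders, the Hugging-Face-style `inputs_embeds`/`last_hidden_state` interface is an assumption, and the paper's actual architecture may differ:

```python
import torch
import torch.nn as nn

class LLaRPPolicySketch(nn.Module):
    """Frozen LLM + small trainable adapters, trained with RL (our sketch)."""
    def __init__(self, frozen_llm, visual_encoder, obs_dim, hidden_dim, num_actions):
        super().__init__()
        self.llm = frozen_llm.eval()
        for p in self.llm.parameters():
            p.requires_grad_(False)                  # the LLM stays frozen
        self.visual_encoder = visual_encoder         # egocentric RGB -> feature tokens
        self.obs_proj = nn.Linear(obs_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, instruction_embeds, egocentric_rgb):
        obs_tokens = self.obs_proj(self.visual_encoder(egocentric_rgb))
        inputs = torch.cat([instruction_embeds, obs_tokens], dim=1)
        hidden = self.llm(inputs_embeds=inputs).last_hidden_state
        # Actions are read off the final position; only obs_proj/action_head are trained.
        return torch.distributions.Categorical(logits=self.action_head(hidden[:, -1]))
```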
1 INTRODUCTION
Large Language Models (LLMs), characterized as billion-parameter models trained on enormous
amounts of text data, have demonstrated unprecedented language understanding capabilities. Fur-
thermore, LLMs have demonstrated powerful capabilities beyond core language understanding prob-
lems, such as dialog systems (Thoppilan et al., 2022; Glaese et al., 2022), visual understanding prob-
lems (Alayrac et al., 2022; Li et al., 2023b; Peng et al., 2023; Koh et al., 2023), reasoning (Wei et al.,
2022; Lewkowycz et al., 2022), code generation (Chen et al., 2021b), embodied reasoning (Driess
et al., 2023), and robot control (Ahn et al., 2022). These capabilities often emerge in a zero-shot
fashion, without dedicated training data for each capability, indicating that LLMs contain knowledge
general and broad enough to apply to numerous domains. Furthermore, these capabilities emerge
despite that the input and output spaces in these domains are oftentimes not naturally expressed in
language, e.g., images as inputs and robot commands as outputs.
A key objective in Embodied AI is generalizable decision-making that can transfer to novel tasks, so
it is natural to ask if the generalization abilities of LLMs can be incorporated into embodied prob-
lems. Existing advances in using LLMs for Embodied AI have relied on static expert datasets (Driess
et al., 2023; Brohan et al., 2023), which requires prohibitively large and expensive amounts of di-
verse expert data. Conversely, Embodied AI simulators enable agents to learn from an environment
through direct interaction, exploration, and reward feedback (Kolve et al., 2019; Szot et al., 2021;
Li et al., 2023a). However, the generalization capabilities of such agents to a large number of new
embodied tasks are not on par with the aforementioned domains.
LLMs have been shown to be applicable in online settings when the control domain is that of nat-
ural language, e.g., Reinforcement Learning from Human Feedback (RLHF) for multi-turn dialog
applications (Ouyang et al., 2022). In this work, we successfully show that LLMs can be adapted for
Reinforcement Learning (RL) problems in Embodied AI, using a method we call Large LAnguage
model Reinforcement learning Policy (LLaRP). We demonstrate advanced capabilities on a diverse
set of rearrangement tasks, where the input and output domains aren't just language (see Fig. 1). In
particular, we demonstrate the following three contributions:
|
RFeynman-plentySpace.pdf | Plenty of Room at the Bottom
Richard P. Feynman
(Dated: Dec. 1959)
This is the transcript of a talk presented by Richard P. Feynman to the American Physical Society
in Pasadena on December 1959, which explores the immense possibilities afforded by miniaturization.
I imagine experimental physicists must often look with
envy at men like Kamerlingh Onnes, who discovered a
field like low temperature, which seems to be bottomless
and in which one can go down and down. Such a man
is then a leader and has some temporary monopoly in
a scientific adventure. Percy Bridgman, in designing a
way to obtain higher pressures, opened up another new
field and was able to move into it and to lead us all along.
The development of ever higher vacuum was a continuing
development of the same kind.
I would like to describe a field, in which little has been
done, but in which an enormous amount can be done in
principle. This field is not quite the same as the others
in that it will not tell us much of fundamental physics (in
the sense of, “What are the strange particles?”) but it is
more like solid-state physics in the sense that it might tell
us much of great interest about the strange phenomena
that occur in complex situations. Furthermore, a point
that is most important is that it would have an enormous
number of technical applications.
What I want to talk about is the problem of manipu-
lating and controlling things on a small scale.
As soon as I mention this, people tell me about minia-
turization, and how far it has progressed today. They tell
me about electric motors that are the size of the nail on
your small finger. And there is a device on the market,
they tell me, by which you can write the Lord’s Prayer
on the head of a pin. But that’s nothing; that’s the most
primitive, halting step in the direction I intend to dis-
cuss. It is a staggeringly small world that is below. In
the year 2000, when they look back at this age, they will
wonder why it was not until the year 1960 that anybody
began seriously to move in this direction.
Why cannot we write the entire 24 volumes of the En-
cyclopedia Brittanica on the head of a pin?
Let’s see what would be involved. The head of a pin is
a sixteenth of an inch across. If you magnify it by 25,000
diameters, the area of the head of the pin is then equal to
the area of all the pages of the Encyclopaedia Brittanica.
Therefore, all it is necessary to do is to reduce in size
all the writing in the Encyclopaedia by 25,000 times. Is
that possible? The resolving power of the eye is about
1/120 of an inch—that is roughly the diameter of one of
the little dots on the fine half-tone reproductions in the
Encyclopaedia. This, when you demagnify it by 25,000
times, is still 80 angstroms in diameter—32 atoms across,
in an ordinary metal. In other words, one of those dots
still would contain in its area 1,000 atoms. So, each dot can easily be adjusted in size as required by the photo-
engraving, and there is no question that there is enough
room on the head of a pin to put all of the Encyclopaedia
Brittanica.
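A quick numerical check of the arithmetic in this passage (the unit conversion and the roughly 2.6-angstrom metal atom spacing are our assumptions):

```python
INCH_IN_ANGSTROM = 2.54e8                        # 1 inch = 2.54 cm = 2.54e8 angstroms
dot = (1 / 120) * INCH_IN_ANGSTROM / 25_000      # eye-resolvable dot, demagnified 25,000x
ATOM_SPACING = 2.6                               # approximate atomic spacing in an ordinary metal
print(round(dot), "angstroms across, i.e. about", round(dot / ATOM_SPACING), "atoms")
# prints roughly 85 angstroms and ~33 atoms, matching the "80 angstroms, 32 atoms" estimate
```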
Furthermore, it can be read if it is so written. Let’s
imagine that it is written in raised letters of metal; that
is, where the black is in the Encyclopedia, we have raised
letters of metal that are actually 1/25,000 of their ordi-
nary size. How would we read it?
If we had something written in such a way, we could
read it using techniques in common use today. (They will
undoubtedly find a better way when we do actually have
it written, but to make my point conservatively I shall
just take techniques we know today.) We would press
the metal into a plastic material and make a mold of it,
then peel the plastic off very carefully, evaporate silica
into the plastic to get a very thin film, then shadow it by
evaporating gold at an angle against the silica so that all
the little letters will appear clearly, dissolve the plastic
away from the silica film, and then look through it with
an electron microscope!
There is no question that if the thing were reduced by
25,000 times in the form of raised letters on the pin, it
would be easy for us to read it today. Furthermore, there
is no question that we would find it easy to make copies
of the master; we would just need to press the same metal
plate again into plastic and we would have another copy.
How do we write small?
The next question is: How do we write it? We have
no standard technique to do this now. But let me argue
that it is not as difficult as it first appears to be. We
can reverse the lenses of the electron microscope in or-
der to demagnify as well as magnify. A source of ions,
sent through the microscope lenses in reverse, could be
focused to a very small spot. We could write with that
spot like we write in a TV cathode ray oscilloscope, by
going across in lines, and having an adjustment which
determines the amount of material which is going to be
deposited as we scan in lines.
This method might be very slow because of space
charge limitations. There will be more rapid methods.
We could first make, perhaps by some photo process, a
screen which has holes in it in the form of the letters.
Then we would strike an arc behind the holes and draw
metallic ions through the holes; then we could again use
our system of lenses and make a small image in the form
of ions, which would deposit the metal on the pin.
A simpler way might be this (though I am not sure it |
NIPS-2017-deep-reinforcement-learning-from-human-preferences-Paper.pdf | Deep Reinforcement Learning
from Human Preferences
Paul F Christiano (OpenAI, paul@openai.com), Jan Leike (DeepMind, leike@google.com), Tom B Brown (Google Brain*, tombbrown@google.com),
Miljan Martic (DeepMind, miljanm@google.com), Shane Legg (DeepMind, legg@google.com), Dario Amodei (OpenAI, damodei@openai.com)
Abstract
For sophisticated reinforcement learning (RL) systems to interact usefully with
real-world environments, we need to communicate complex goals to these systems.
In this work, we explore goals defined in terms of (non-expert) human preferences
between pairs of trajectory segments. We show that this approach can effectively
solve complex RL tasks without access to the reward function, including Atari
games and simulated robot locomotion, while providing feedback on less than
1% of our agent’s interactions with the environment. This reduces the cost of
human oversight far enough that it can be practically applied to state-of-the-art
RL systems. To demonstrate the flexibility of our approach, we show that we can
successfully train complex novel behaviors with about an hour of human time.
These behaviors and environments are considerably more complex than any which
have been previously learned from human feedback.
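For reference, the way pairwise comparisons of trajectory segments σ¹, σ² are usually turned into a training signal for a learned reward r̂ is a Bradley-Terry-style model (our notation; see the paper for the exact estimator), fit by cross-entropy against the human labels:

\[ \hat{P}\big[\sigma^1 \succ \sigma^2\big] \;=\; \frac{\exp \sum_t \hat{r}(o_t^1, a_t^1)}{\exp \sum_t \hat{r}(o_t^1, a_t^1) + \exp \sum_t \hat{r}(o_t^2, a_t^2)}. \]

The RL agent is then trained against the predicted rewards rather than a hand-specified reward function.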
1 Introduction
Recent success in scaling reinforcement learning (RL) to large problems has been driven in domains
that have a well-specified reward function (Mnih et al., 2015, 2016; Silver et al., 2016). Unfortunately,
many tasks involve goals that are complex, poorly-defined, or hard to specify. Overcoming this
limitation would greatly expand the possible impact of deep RL and could increase the reach of
machine learning more broadly.
For example, suppose that we wanted to use reinforcement learning to train a robot to clean a table or
scramble an egg. It’s not clear how to construct a suitable reward function, which will need to be a
function of the robot’s sensors. We could try to design a simple reward function that approximately
captures the intended behavior, but this will often result in behavior that optimizes our reward
function without actually satisfying our preferences. This difficulty underlies recent concerns about
misalignment between our values and the objectives of our RL systems (Bostrom, 2014; Russell,
2016; Amodei et al., 2016). If we could successfully communicate our actual objectives to our agents,
it would be a significant step towards addressing these concerns.
If we have demonstrations of the desired task, we can use inverse reinforcement learning (Ng and
Russell, 2000) or imitation learning to copy the demonstrated behavior. But these approaches are not
directly applicable to behaviors that are difficult for humans to demonstrate (such as controlling a
robot with many degrees of freedom but non-human morphology).
*Work done while at OpenAI.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. |
2403.20327.pdf | Gecko: Versatile Text Embeddings Distilled
from Large Language Models
Jinhyuk Lee*, Zhuyun Dai*, Xiaoqi Ren*, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko,
Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernandez Abrego, Weiqiang Shi, Nithi
Gupta, Aditya Kusupati, Prateek Jain, Siddhartha Reddy Jonnalagadda, Ming-Wei Chang and Iftekhar Naim
*Equal contributions
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval
performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a
retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using
an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query,
and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our
approach is demonstrated by the compactness of the Gecko. On the Massive Text Embedding Benchmark
(MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with 768 embedding
size. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x
larger models and 5x higher dimensional embeddings.
1. Introduction
Text embedding models represent natural language as dense vectors, positioning semantically similar
text near each other within the embedding space (Gao et al., 2021; Le and Mikolov, 2014; Reimers
and Gurevych, 2019). These embeddings are commonly used for a wide range of downstream tasks
including document retrieval, sentence similarity, classification, and clustering (Muennighoff et al.,
2023). Instead of building separate embedding models for each downstream task, recent efforts seek
to create a single embedding model supporting many tasks.
The recent development of general-purpose text embedding models presents a challenge: these
models require large amounts of training data to comprehensively cover desired domains and skills.
Recent embedding efforts have focused on using extensive collections of training examples (Li et al.,
2023; Wang et al., 2022). Large language models (LLMs) offer a powerful alternative, as they contain
vast knowledge across various domains and are known to be exceptional few-shot learners (Anil et al.,
2023; Brown et al., 2020). Recent work demonstrates the effectiveness of using LLMs for synthetic
data generation, but the focus has primarily been on augmenting existing human-labeled data or
improving performance in specific domains (Dai et al., 2022; Jeronymo et al., 2023). It motivates us
to re-examine: to what extent can we leverage LLMs directly to improve text embedding models?
In this work, we present Gecko, a highly versatile yet efficient embedding model, powered by the
vast world knowledge of LLMs. Our approach leverages insights from knowledge distillation to create
a two-step LLM-powered embedding model. Starting with a large corpus of (unlabeled) passages,
we use a few-shot prompted LLM to generate a relevant task and query for each passage, similar to
Dai et al. (2022) and Wang et al. (2023). We then embed the concatenated task and query using a
pretrained embedding model to obtain nearest neighbor passages, use an LLM to rerank the passages,
and obtain positive and negative passages based on the LLM scores. The reranking step is key to
enhance the quality as we discover that the best passage to answer the generated query often differs
from the original source passage. We show that using our LLM-based dataset, FRet, alone can lead to
significant improvements, setting a strong baseline as a zero-shot embedding model on MTEB.
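The two-step recipe described above can be sketched as follows; every model call (generate_task_and_query, embed, llm_rank) is a placeholder for an LLM or embedder of the reader's choice, not an API from the paper, and the brute-force retrieval is for illustration only:

```python
def build_fret_example(passage, corpus, generate_task_and_query, embed, llm_rank, k=20):
    """Sketch of the LLM-distillation pipeline (our reading of the description above).

    1. An LLM writes a task description and a query for the seed passage.
    2. An existing embedder retrieves k candidate passages for that query.
    3. The LLM reranks the candidates; the top-ranked one becomes the positive
       (it may differ from the seed passage) and a low-ranked one the hard negative.
    """
    task, query = generate_task_and_query(passage)
    q_vec = embed(task + " " + query)
    candidates = sorted(corpus, key=lambda p: -(q_vec @ embed(p)))[:k]
    ranked = llm_rank(query, candidates)            # best-to-worst according to the LLM
    return {"task": task, "query": query,
            "positive": ranked[0], "hard_negative": ranked[-1]}
```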
Corresponding author(s): jinhyuklee@google.com
©2024 Google DeepMind. All rights reserved. |
1509.02971.pdf | Published as a conference paper at ICLR 2016
CONTINUOUS CONTROL WITH DEEP REINFORCEMENT
LEARNING
Timothy P. Lillicrap∗, Jonathan J. Hunt∗, Alexander Pritzel, Nicolas Heess,
Tom Erez, Yuval Tassa, David Silver & Daan Wierstra
Google Deepmind
London, UK
{countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wierstra}@google.com
ABSTRACT
We adapt the ideas underlying the success of Deep Q-Learning to the continuous
action domain. We present an actor-critic, model-free algorithm based on the de-
terministic policy gradient that can operate over continuous action spaces. Using
the same learning algorithm, network architecture and hyper-parameters, our al-
gorithm robustly solves more than 20 simulated physics tasks, including classic
problems such as cartpole swing-up, dexterous manipulation, legged locomotion
and car driving. Our algorithm is able to find policies whose performance is com-
petitive with those found by a planning algorithm with full access to the dynamics
of the domain and its derivatives. We further demonstrate that for many of the
tasks the algorithm can learn policies “end-to-end”: directly from raw pixel in-
puts.
1 INTRODUCTION
One of the primary goals of the field of artificial intelligence is to solve complex tasks from unpro-
cessed, high-dimensional, sensory input. Recently, significant progress has been made by combin-
ing advances in deep learning for sensory processing (Krizhevsky et al., 2012) with reinforcement
learning, resulting in the “Deep Q Network” (DQN) algorithm (Mnih et al., 2015) that is capable of
human level performance on many Atari video games using unprocessed pixels for input. To do so,
deep neural network function approximators were used to estimate the action-value function.
However, while DQN solves problems with high-dimensional observation spaces, it can only handle
discrete and low-dimensional action spaces. Many tasks of interest, most notably physical control
tasks, have continuous (real valued) and high dimensional action spaces. DQN cannot be straight-
forwardly applied to continuous domains since it relies on finding the action that maximizes the
action-value function, which in the continuous valued case requires an iterative optimization process
at every step.
An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous
domains is to simply discretize the action space. However, this has many limitations, most no-
tably the curse of dimensionality: the number of actions increases exponentially with the number
of degrees of freedom. For example, a 7 degree of freedom system (as in the human arm) with the
coarsest discretization a_i ∈ {−k, 0, k} for each joint leads to an action space with dimensionality:
3^7 = 2187. The situation is even worse for tasks that require fine control of actions as they require
a correspondingly finer grained discretization, leading to an explosion of the number of discrete
actions. Such large action spaces are difficult to explore efficiently, and thus successfully training
DQN-like networks in this context is likely intractable. Additionally, naive discretization of action
spaces needlessly throws away information about the structure of the action domain, which may be
essential for solving many problems.
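The explosion the paragraph above describes is easy to tabulate (illustrative numbers only):

```python
for joints in (3, 7, 10):
    for levels in (3, 5, 11):   # discrete values allowed per joint
        print(f"{joints} DoF x {levels} levels -> {levels ** joints:,} discrete actions")
# e.g. 7 DoF at the coarsest 3 levels already gives 3**7 = 2,187 actions
```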
In this work we present a model-free, off-policy actor-critic algorithm using deep function approx-
imators that can learn policies in high-dimensional, continuous action spaces. Our work is based
∗These authors contributed equally.
|
10.1101.2024.03.07.584001.pdf | Protein language models are biased by unequal sequence
sampling across the tree of life
Frances Ding frances@berkeley.edu
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Jacob Steinhardt jsteinhardt@berkeley.edu
Departments of Statistics and Electrical Engineering and Computer Sciences
University of California, Berkeley
Abstract
Protein language models (pLMs) trained on large protein sequence databases have been
used to understand disease and design novel proteins. In design tasks, the likelihood of a
protein sequence under a pLM is often used as a proxy for protein fitness, so it is critical
to understand what signals likelihoods capture. In this work we find that pLM likelihoods
unintentionally encode a species bias: likelihoods of protein sequences from certain species
are systematically higher, independent of the protein in question. We quantify this bias
and show that it arises in large part because of unequal species representation in popular
protein sequence databases. We further show that the bias can be detrimental for some
protein design applications, such as enhancing thermostability. These results highlight the
importance of understanding and curating pLM training data to mitigate biases and improve
protein design capabilities in under-explored parts of sequence space.
1 Introduction
Proteins are the building blocks and workhorses of life, performing essential roles in human and ecosystem
health. Inspired by advances in natural language processing, many different protein language models (pLMs)
have been trained to model the distribution of naturally occurring protein sequences (Rives et al., 2021;
Elnaggar et al., 2021; Madani et al., 2023; Lin et al., 2023; Alamdari et al., 2023). pLMs have been successfully
used to predict protein 3D structure (Lin et al., 2023), catalytic activity (Eom et al., 2024), and other
biophysical properties (Brandes et al., 2022; Jagota et al., 2023), generally with additional supervision for
fine-tuning. Excitingly, without needing additional supervision, likelihoods from pLMs have been shown to
correlate well with protein fitness, i.e. desirable qualities such as catalytic activity, stability, and binding
affinity (Meier et al., 2021; Notin et al., 2023a; Nijkamp et al., 2023).
Because of this correlation with fitness, pLM likelihoods are increasingly used in protein design. They have
been used to screen for potentially beneficial mutations (Johnson et al., 2023), to design libraries of protein
candidates with higher hit rates than previously state-of-the-art synthetic libraries (Shin et al., 2021), and to
efficiently evolve human antibodies without any additional supervision (Hie et al., 2023).
In this work we find that likelihoods from popular pLMs have a species bias: likelihoods of naturally occurring
protein sequences are systematically higher in certain species, which can be detrimental for some protein
design applications. We describe the extent of this species bias, show that it arises from imbalanced species
representation in the protein sequence databases used for training, and measure the impact of the bias on
protein design.
We first describe a stylized model for pLM training to show intuitively how this bias arises (Section 3), then
support our empirical claims with three main results in Sections 4-6. In Section 4 we show that across the
many different proteins we study, certain species almost always have higher pLM likelihoods for their protein
|
2305.14314.pdf | QL ORA: Efficient Finetuning of Quantized LLMs
Tim Dettmers∗, Artidoro Pagnoni∗, Ari Holtzman,
Luke Zettlemoyer
University of Washington
{dettmers,artidoro,ahai,lsz}@cs.washington.edu
Abstract
We present QLORA, an efficient finetuning approach that reduces memory us-
age enough to finetune a 65B parameter model on a single 48GB GPU while
preserving full 16-bit finetuning task performance. QLORA backpropagates gradi-
ents through a frozen, 4-bit quantized pretrained language model into Low Rank
Adapters (LoRA). Our best model family, which we name Guanaco, outperforms
all previous openly released models on the Vicuna benchmark, reaching 99.3%
of the performance level of ChatGPT while only requiring 24 hours of finetuning
on a single GPU. QLORA introduces a number of innovations to save memory
without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that
is information theoretically optimal for normally distributed weights (b) Double
Quantization to reduce the average memory footprint by quantizing the quantization
constants, and (c) Paged Optimizers to manage memory spikes. We use QLORA
to finetune more than 1,000 models, providing a detailed analysis of instruction
following and chatbot performance across 8 instruction datasets, multiple model
types (LLaMA, T5), and model scales that would be infeasible to run with regular
finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA
finetuning on a small high-quality dataset leads to state-of-the-art results, even
when using smaller models than the previous SoTA. We provide a detailed analysis
of chatbot performance based on both human and GPT-4 evaluations showing that
GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Fur-
thermore, we find that current chatbot benchmarks are not trustworthy to accurately
evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates
where Guanaco fails compared to ChatGPT. We release all of our models and code,
including CUDA kernels for 4-bit training.2
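To make items (a)-(c) above more tangible, here is a toy numpy illustration of block-wise absmax quantization plus "double quantization" of the per-block constants. It is deliberately simplified: the real method uses the NF4 data type, 8-bit float constants with mean subtraction, and custom CUDA kernels, none of which are reproduced here:

```python
import numpy as np

def blockwise_quant(w, block=64, bits=4):
    """Toy block-wise absmax quantization: each block is scaled by its absolute
    maximum and rounded to a signed grid with 2**(bits-1) - 1 levels."""
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) + 1e-12   # one fp constant per block
    levels = 2 ** (bits - 1) - 1
    codes = np.round(w / scales * levels).astype(np.int8)   # 4-bit codes (held in int8 here)
    return codes, scales.squeeze(1), levels

def double_quant(scales, block=256):
    """Toy 'double quantization': quantize the quantization constants themselves
    so their memory overhead per parameter shrinks further."""
    s = scales.reshape(-1, block)
    s_max = s.max(axis=1, keepdims=True) + 1e-12
    return np.round(s / s_max * 255).astype(np.uint8), s_max.squeeze(1)

w = np.random.default_rng(0).normal(size=4096 * 64).astype(np.float32)
codes, scales, levels = blockwise_quant(w)
s_codes, s_max = double_quant(scales)
scale_hat = (s_codes / 255.0 * s_max[:, None]).reshape(-1, 1)   # dequantized constants
w_hat = codes / levels * scale_hat                               # dequantized weights
print("mean abs reconstruction error:", np.abs(w.reshape(-1, 64) - w_hat).mean())
```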
1 Introduction
Finetuning large language models (LLMs) is a highly effective way to improve their performance,
[40,62,43,61,59,37] and to add desirable or remove undesirable behaviors [ 43,2,4]. However,
finetuning very large models is prohibitively expensive; regular 16-bit finetuning of a LLaMA 65B
parameter model [ 57] requires more than 780 GB of GPU memory. While recent quantization
methods can reduce the memory footprint of LLMs [ 14,13,18,66], such techniques only work for
inference and break down during training [65].
We demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any
performance degradation. Our method, QLORA, uses a novel high-precision technique to quantize
a pretrained model to 4-bit, then adds a small set of learnable Low-rank Adapter weights [ 28]
∗Equal contribution.
2https://github.com/artidoro/qlora andhttps://github.com/TimDettmers/bitsandbytes
Preprint. Under review. |
2312.16682.pdf | Some things are more CRINGE than others:
Preference Optimization with the Pairwise Cringe Loss
Jing Xu1, Andrew Lee1, Sainbayar Sukhbaatar1, Jason Weston1
Abstract
Practitioners commonly align large language mod-
els using pairwise preferences, i.e., given labels
of the type response A is preferred to response B
for a given input. Perhaps less commonly, meth-
ods have also been developed for binary feed-
back, i.e. training models given labels of type
response A is good or bad. We show how an ex-
isting performant binary feedback method, the
Cringe Loss (Adolphs et al., 2022), can be gen-
eralized to the pairwise preference setting using
a simple soft margin extension. Pairwise Cringe
Loss is straightforward to implement and efficient
to train, and we find it outperforms state-of-the-art
preference optimization algorithms such as PPO
and DPO on the AlpacaFarm benchmark.
1. Introduction
Aligning large language models (LLMs) after pre-training
can give large gains in their performance for downstream
tasks for users (Roller et al., 2020; Gururangan et al., 2020;
Ouyang et al., 2022). Exactly how to implement this align-
ment depends on the labels one collects. Given positive
examples of correct behavior one can perform supervised
fine-tuning (SFT) using standard likelihood-based training.
Given both positive and negative examples ( binary feed-
back), one can use methods such as unlikelihood training
on the negative examples (Welleck et al., 2020), or the more
performant Cringe Loss (Adolphs et al., 2022). However, a
more common approach than using binary feedback, pop-
ularized by work such as Stiennon et al. (2020); Ouyang
et al. (2022); Touvron et al. (2023) is to collect pairwise
preferences of the type response A is better than response B
for a given input. In this case one can use methods such as
PPO (Schulman et al., 2017), DPO (Rafailov et al., 2023)
and other variants.
In this work we seek to compare SFT, binary feedback and
pairwise preference algorithms, and to ask the question: can
one convert existing binary feedback algorithms to use pairwise preference data?
1Meta. Correspondence to: Jing Xu <jingxu23@meta.com>.
In particular, the Cringe Loss is a
method for binary feedback, which we show can be general-
ized to the pairwise preference case. The Cringe Loss works
as follows: positive examples use the standard likelihood
training loss, while for a given negative example it contrasts
each token in the negative sequence against other likely
tokens – to encourage the negative sequence to no longer
be the top-ranked sequence. After training on the initial
feedback data, the method is then iterated by labeling data
using the improved model, which was shown to improve
results further. Cringe Loss was shown to perform well with
binary feedback data compared to competing methods, such
as SFT, unlikelihood loss and best-of-N reranking (Adolphs
et al., 2022) and for improving large-scale dialogue systems
(Xu et al., 2023b). However, collecting and using pairwise
preferences for training has currently proven a more popular
approach to developing aligned LLMs.
We thus explore generalizing the Cringe Loss to the pairwise
preference setting. We hence develop the Pairwise Cringe
Loss, by using a differentiable margin-based loss on the
pair of responses. In particular, we add a margin-based
multiplier to the Cringe Loss to turn it on or off depending
on how large the probability gap between the pair is. When
the preferred response A becomes much more likely than
the response B, the Cringe Loss is turned off so that the
model capacity is better spent on pairs that are closer in
probabilities.
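A schematic of that gating idea, written only from the description above; the exact gate, margin, and Cringe term used in the paper may differ, and `cringe_term` here is a placeholder for the binary-feedback Cringe penalty on the dispreferred response:

```python
import torch

def pairwise_cringe_sketch(logp_pos, logp_neg, cringe_term, margin=1.0, temp=1.0):
    """logp_pos / logp_neg: sequence log-probs of the preferred / dispreferred response."""
    # Gate is ~1 while the pair is close (or inverted) and ~0 once the preferred
    # response is already more likely by at least `margin`, switching the penalty off.
    gate = torch.sigmoid((logp_neg - logp_pos + margin) / temp)
    return -logp_pos + gate * cringe_term
```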
We experimentally compare competing approaches, includ-
ing binary and pairwise variants of Cringe Loss. The first
task is to reduce repetitions (Arora et al., 2022; Welleck
et al., 2020), which can be measured accurately so it gives
us more control. In this task, we find that Pairwise Cringe
outperforms Binary Cringe, and has performance similar to
DPO, while the Pairwise Cringe generations have slightly
better quality. Next, we employ a more realistic setup us-
ing the AlpacaFarm (Dubois et al., 2023) benchmark that
provides pairwise preference data for general instruction fol-
lowing. Pairwise Cringe Loss again outperforms the Binary
Cringe variant, in addition to SFT, and more importantly out-
performs state-of-the-art methods DPO and PPO. Pairwise
Cringe Loss is simple to implement and efficient to train,
|
5175-reward-design-with-language-mo.pdf | Published as a conference paper at ICLR 2023
REWARD DESIGN WITH LANGUAGE MODELS
Minae Kwon, Sang Michael Xie, Kalesha Bullard†, Dorsa Sadigh
Stanford University, DeepMind†
{minae, xie, dorsa}@cs.stanford.edu, ksbullard@deepmind.com†
ABSTRACT
Reward design in reinforcement learning (RL) is challenging since specifying human
notions of desired behavior may be difficult via reward functions or require many expert
demonstrations. Can we instead cheaply design rewards using a natural language inter-
face? This paper explores how to simplify reward design by prompting a large language
model (LLM) such as GPT-3 as a proxy reward function, where the user provides a tex-
tual prompt containing a few examples (few-shot) or a description (zero-shot) of the de-
sired behavior. Our approach leverages this proxy reward function in an RL framework.
Specifically, users specify a prompt once at the beginning of training. During training,
the LLM evaluates an RL agent’s behavior against the desired behavior described by the
prompt and outputs a corresponding reward signal. The RL agent then uses this reward
to update its behavior. We evaluate whether our approach can train agents aligned with
user objectives in the Ultimatum Game, matrix games, and the DEAL OR NO DEAL
negotiation task. In all three tasks, we show that RL agents trained with our framework
are well-aligned with the user’s objectives and outperform RL agents trained with
reward functions learned via supervised learning. Code and prompts can be found here.
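The training loop described above can be sketched as follows; `env`, `agent`, `llm_score`, and `render` are placeholders for the user's environment, RL learner, prompted-LLM call, and episode-to-text formatter, and none of these names come from the paper:

```python
def train_with_llm_reward(env, agent, llm_score, render, user_prompt, episodes=1000):
    """RL with an LLM as a proxy reward function (our sketch of the setup above)."""
    for _ in range(episodes):
        obs, done, episode = env.reset(), False, []
        while not done:
            action = agent.act(obs)
            obs, done = env.step(action)        # placeholder env protocol
            episode.append((obs, action))
        # The user wrote `user_prompt` once, before training; the LLM compares the
        # rendered episode against that described objective and returns a scalar.
        reward = llm_score(user_prompt, render(episode))
        agent.update(episode, reward)
```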
1 INTRODUCTION
Autonomous agents are becoming increasingly capable with the rise of compute and data. This underscores
the importance for human users to be able to control what policies the agents learn and ensure the policies
are aligned with their objectives. For instance, imagine training an agent to represent users in a salary
negotiation. A working mother fighting for a livable wage may want their agent to be stubborn whereas a
new hire looking to develop a good relationship with the company may want their agent to be more versatile.
Currently, users specify desired behaviors by 1) designing reward functions or 2) providing large amounts of
labeled data. Both approaches are challenging and impractical for different reasons. Designing reward func-
tions is not an intuitive way to specify preferences. For instance, it isn’t straightforward how to write a reward
function for a “versatile” negotiator. Furthermore, designing reward functions that balance between different
objectives — also known as the “reward design problem” — is notoriously difficult because agents are sus-
ceptible to reward hacking (Amodei et al., 2016; Hadfield-Menell et al., 2017). On the other hand, one can
learn a reward function from labeled examples. However, that is not possible with a single example; we need
large amounts of labeled data to capture the nuances of different users’ preferences and objectives, which
has shown to be costly (Zhang et al., 2016). Additionally, both approaches do not generalize well to new
users who have different objectives — we would have to re-design our reward functions or re-collect data.
Our aim is to create an easier way for users to communicate their preferences, where the interface is
more intuitive than crafting a reward function and where they can cheaply specify their preferences with
no more than a few examples. To do this, we leverage large language models (LLMs) that are trained
on internet-scale text data and have shown an impressive ability to learn in-context from few or zero
examples (Brown et al., 2020). Our key insight is that
The scale of data that LLMs have been trained on make them great in-context learners
and also allows them to capture meaningful commonsense priors about human behavior.
Given a few examples or a description demonstrating the user’s objective, an LLM
should be able to provide an accurate instantiation of reward values on a new test
example, allowing for easier generalization to new objectives.
To this end, we explore how to prompt an LLM as a proxy reward function to train RL agents from user
inputs. In our approach, the user specifies an objective with a natural language prompt. Objectives can
1 |
2005.14165.pdf | Language Models are Few-Shot Learners
Tom B. Brown∗, Benjamin Mann∗, Nick Ryder∗, Melanie Subbiah∗,
Jared Kaplan†, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry,
Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,
Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray,
Benjamin Chess, Jack Clark, Christopher Berner,
Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
OpenAI
Abstract
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training
on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic
in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of
thousands of examples. By contrast, humans can generally perform a new language task from only
a few examples or from simple instructions – something which current NLP systems still largely
struggle to do. Here we show that scaling up language models greatly improves task-agnostic,
few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-
tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion
parameters, 10x more than any previous non-sparse language model, and test its performance in
the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning,
with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3
achieves strong performance on many NLP datasets, including translation, question-answering, and
cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as
unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same
time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some
datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally,
we find that GPT-3 can generate samples of news articles which human evaluators have difficulty
distinguishing from articles written by humans. We discuss broader societal impacts of this finding
and of GPT-3 in general.
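The "tasks specified purely via text interaction" setup can be pictured with a tiny translation prompt of the kind the paper illustrates (format shown for illustration; the model is simply asked to continue the text):

```python
demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
prompt = "Translate English to French.\n"
prompt += "".join(f"{en} => {fr}\n" for en, fr in demos)   # few-shot demonstrations
prompt += "plush giraffe =>"                               # the new query to complete
print(prompt)
```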
∗Equal contribution
†Johns Hopkins University, OpenAI
Author contributions listed at end of paper. |
2304.14767.pdf | Dissecting Recall of Factual Associations in
Auto-Regressive Language Models
Mor Geva1, Jasmijn Bastings1, Katja Filippova1, Amir Globerson2,3
1Google DeepMind   2Tel Aviv University   3Google Research
{pipek, bastings, katjaf, amirg}@google.com
Abstract
Transformer-based language models (LMs)
are known to capture factual knowledge in
their parameters. While previous work looked
intowhere factual associations are stored, only
little is known about how they are retrieved in-
ternally during inference. We investigate this
question through the lens of information flow.
Given a subject-relation query, we study how
the model aggregates information about the
subject and relation to predict the correct at-
tribute. With interventions on attention edges,
we first identify two critical points where in-
formation propagates to the prediction: one
from the relation positions followed by another
from the subject positions. Next, by analyz-
ing the information at these points, we unveil a
three-step internal mechanism for attribute ex-
traction. First, the representation at the last-
subject position goes through an enrichment
process, driven by the early MLP sublayers,
to encode many subject-related attributes. Sec-
ond, information from the relation propagates
to the prediction. Third, the prediction repre-
sentation “queries” the enriched subject to ex-
tract the attribute. Perhaps surprisingly, this
extraction is typically done via attention heads,
which often encode subject-attribute mappings
in their parameters. Overall, our findings in-
troduce a comprehensive view of how factual
associations are stored and extracted internally
in LMs, facilitating future research on knowl-
edge localization and editing.
1 Introduction
Transformer-based language models (LMs) cap-
ture vast amounts of factual knowledge (Roberts
et al., 2020; Jiang et al., 2020), which they en-
code in their parameters and recall during inference
(Petroni et al., 2019; Cohen et al., 2023). While
recent works focused on identifying where factual
knowledge is encoded in the network (Meng et al.,
2022a; Dai et al., 2022; Wallat et al., 2020), it
remains unclear how this knowledge is extracted
from the model parameters during inference.
[Figure 1 diagram labels: example query "Beats Music is owned by" → "Apple"; MLP and ATTN sublayers; (A) Subject enrichment, (B) Relation propagation, (C) Attribute extraction.]
Figure 1: Illustration of our findings: given a subject-relation query, a subject representation is constructed via attributes’ enrichment from MLP sublayers (A), while the relation propagates to the prediction (B). The attribute is then extracted by the MHSA sublayers (C).
In this work, we investigate this question through
the lens of information flow, across layers and input
positions (Elhage et al., 2021). We focus on a ba-
sic information extraction setting, where a subject
and a relation are given in a sentence (e.g. “Beats
Music is owned by” ), and the next token is the cor-
responding attribute (i.e. “Apple” ). We restrict
our analysis to cases where the model predicts the
correct attribute as the next token, and set out to un-
derstand how internal representations evolve across
the layers to produce the output.
Focusing on modern auto-regressive decoder-
only LMs, such an extraction process could be
implemented in many different ways. Informally,
the model needs to “merge” the subject and relation
in order to be able to extract the right attribute, and
this merger can be conducted at different layers and |
2307.00524.pdf | Large Language Models Enable Few-Shot Clustering
Vijay Viswanathan1, Kiril Gashteovski2,
Carolin Lawrence2, Tongshuang Wu1, Graham Neubig1, 3
1Carnegie Mellon University,2NEC Laboratories Europe,3Inspired Cognition
Abstract
Unlike traditional unsupervised clustering,
semi-supervised clustering allows users to pro-
vide meaningful structure to the data, which
helps the clustering algorithm to match the
user’s intent. Existing approaches to semi-
supervised clustering require a significant
amount of feedback from an expert to improve
the clusters. In this paper, we ask whether
a large language model can amplify an ex-
pert’s guidance to enable query-efficient, few-
shot semi-supervised text clustering. We show
that LLMs are surprisingly effective at im-
proving clustering. We explore three stages
where LLMs can be incorporated into cluster-
ing: before clustering (improving input fea-
tures), during clustering (by providing con-
straints to the clusterer), and after clustering
(using LLMs post-correction). We find incor-
porating LLMs in the first two stages can rou-
tinely provide significant improvements in clus-
ter quality, and that LLMs enable a user to
make trade-offs between cost and accuracy to
produce desired clusters. We release our code
and LLM prompts for the public to use.1
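One of the three stages studied above, LLM-provided pairwise constraints during clustering, can be sketched as follows; `llm_same_cluster` stands in for a prompted LLM seeded with the expert's few demonstrations, and the returned constraint lists would then be passed to a pairwise-constrained clusterer (e.g. a PCKMeans-style algorithm). The names are ours, not the released code:

```python
import itertools
import random

def pseudo_feedback(texts, llm_same_cluster, n_pairs=200, seed=0):
    """Expand a small amount of expert guidance into many pairwise constraints.

    llm_same_cluster(a, b) should return True/False for whether the two texts
    belong to the same cluster, as judged by an LLM prompted with the expert's
    few example judgments (placeholder call).
    """
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(len(texts)), 2))
    pairs = rng.sample(all_pairs, min(n_pairs, len(all_pairs)))
    must_link, cannot_link = [], []
    for i, j in pairs:
        (must_link if llm_same_cluster(texts[i], texts[j]) else cannot_link).append((i, j))
    return must_link, cannot_link
```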
1 Introduction
Unsupervised clustering aims to do an impossible
task: organize data in a way that satisfies a domain
expert’s needs without any specification of what
those needs are. Clustering, by its nature, is fun-
damentally an underspecified problem. According
to Caruana (2013), this underspecification makes
clustering “probably approximately useless.”
Semi-supervised clustering, on the other hand,
aims to solve this problem by enabling the domain
expert to guide the clustering algorithm (Bae et al.,
2020). Prior works have introduced different types
of interaction between an expert and a clustering
algorithm, such as initializing clusters with hand-
picked seed points (Basu et al., 2002), specifying
1https://github.com/viswavi/few-shot-clustering
[Figure 1 panels: "Traditional Semi-Supervised Clustering" vs. "LLM-Guided Few-Shot Clustering", with an LLM mediating the user's feedback in the latter.]
Figure 1: In traditional semi-supervised clustering, a user provides a large amount of feedback to the clusterer. In our approach, the user prompts an LLM with a small amount of feedback. The LLM then generates a large amount of pseudo-feedback for the clusterer.
pairwise constraints (Basu et al., 2004; Zhang et al.,
2019), providing feature feedback (Dasgupta and
Ng, 2010), splitting or merging clusters (Awasthi
et al., 2013), or locking one cluster and refining the
rest (Coden et al., 2017). These interfaces have all
been shown to give experts control of the final clus-
ters. However, they require significant effort from
the expert. For example, in a simulation that uses
split/merge, pairwise constraint, and lock/refine in-
teractions (Coden et al., 2017), it took between 20
and 100 human-machine interactions to get any
clustering algorithm to produce clusters that fit the
human’s needs. Therefore, for large, real-world
datasets with a large number of possible clusters,
the feedback cost required by interactive clustering
algorithms can be immense.
Building on a body of recent work that uses
Large Language Models (LLMs) as noisy simu-
lations of human decision-making (Fu et al., 2023;
Horton, 2023; Park et al., 2023), we propose a dif-
ferent approach for semi-supervised text clustering.
In particular, we answer the following research
question: Can an expert provide a few demonstra-
tions of their desired interaction (e.g., pairwise
constraints) to a large language model, then let the
LLM direct the clustering algorithm? |
deep-boltzmann-machines.pdf | Deep Boltzmann Machines
Ruslan Salakhutdinov (Department of Computer Science, University of Toronto, rsalakhu@cs.toronto.edu)
Geoffrey Hinton (Department of Computer Science, University of Toronto, hinton@cs.toronto.edu)
Abstract
We present a new learning algorithm for Boltz-
mann machines that contain many layers of hid-
den variables. Data-dependent expectations are
estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.
1 Introduction
The original learning algorithm for Boltzmann machines (Hinton and Sejnowski, 1983) required randomly initialized Markov chains to approach their equilibrium distributions in order to estimate the data-dependent and data-independent expectations that a connected pair of binary variables would both be on. The difference of these two expectations is the gradient required for maximum likelihood learning. Even with the help of simulated annealing, this learning procedure was too slow to be practical. Learning can be made much more efficient in a restricted Boltzmann machine (RBM), which has no connections between hidden units (Hinton, 2002).
Appearing in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS) 2009, Clearwater Beach, Florida, USA. Volume 5 of JMLR: W&CP 5. Copyright 2009 by the authors.
Multiple hidden layers can be learned by treating the hidden activities of one RBM as the data for training a higher-level RBM (Hinton et al., 2006; Hinton and Salakhutdinov, 2006). However, if multiple layers are learned in this greedy, layer-by-layer way, the resulting composite model is not a multilayer Boltzmann machine (Hinton et al., 2006). It is a hybrid generative model called a “deep belief net” that has undirected connections between its top two layers and downward directed connections between all its lower layers.
In this paper we present a much more efficient learning
procedure for fully general Boltzmann machines. We also
show that if the connections between hidden units are re-
stricted in such a way that the hidden units form multi-
ple layers, it is possible to use a stack of slightly modified
RBM’s to initialize the weights of a deep Boltzmann ma-
chine before applying our new learning procedure.
2 Boltzmann Machines (BM's)
A Boltzmann machine is a network of symmetrically cou-
pledstochasticbinaryunits. Itcontainsasetofvisibleun its
v∈{0,1}D, and a set of hidden units h∈{0,1}P(see
Fig.1). Theenergyofthestate {v,h}is definedas:
E(v,h;θ) =−1
2v⊤Lv−1
2h⊤Jh−v⊤Wh,(1)
where θ={W,L,J}arethemodelparameters1:W,L,J
represent visible-to-hidden, visible-to-visible, and hi dden-
to-hidden symmetric interaction terms. The diagonal ele-
ments of LandJare set to 0. The probability that the
modelassignstoa visiblevector vis:
p(v; θ) = p∗(v; θ) / Z(θ) = (1/Z(θ)) ∑_h exp(−E(v, h; θ)),    (2)
Z(θ) = ∑_v ∑_h exp(−E(v, h; θ)),    (3)
where p∗ denotes unnormalized probability, and Z(θ) is
the partition function. The conditional distributions over
¹We have omitted the bias terms for clarity of presentation.
Appearing in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS) 2009, Clearwater Beach, Florida, USA. Volume 5 of JMLR: W&CP 5. Copyright 2009 by the authors. |
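To make Eqs. (1)–(3) concrete, the sketch below evaluates the energy of a state and brute-forces the partition function by enumerating all 2^D · 2^P configurations. This is only feasible for toy sizes; the parameters here are random placeholders, and the intractability of Z(θ) in general is exactly what motivates the variational and persistent-chain approximations developed in the paper.

```python
import itertools
import numpy as np

def energy(v, h, W, L, J):
    # E(v,h;θ) = -1/2 v^T L v - 1/2 h^T J h - v^T W h   (Eq. 1, biases omitted)
    return -0.5 * v @ L @ v - 0.5 * h @ J @ h - v @ W @ h

def partition_function(W, L, J):
    # Z(θ) = sum over all joint states of exp(-E)   (Eq. 3), by brute force
    D, P = W.shape
    Z = 0.0
    for v in itertools.product([0, 1], repeat=D):
        for h in itertools.product([0, 1], repeat=P):
            Z += np.exp(-energy(np.array(v, float), np.array(h, float), W, L, J))
    return Z

def p_v(v, W, L, J, Z):
    # p(v;θ) = (1/Z) * sum_h exp(-E(v,h;θ))   (Eq. 2)
    P = W.shape[1]
    s = sum(np.exp(-energy(v, np.array(h, float), W, L, J))
            for h in itertools.product([0, 1], repeat=P))
    return s / Z

# Tiny example: D = 3 visible and P = 2 hidden units, random symmetric
# couplings with zero diagonals, as in the text.
rng = np.random.default_rng(0)
D, P = 3, 2
W = 0.1 * rng.standard_normal((D, P))
L = 0.1 * rng.standard_normal((D, D)); L = (L + L.T) / 2; np.fill_diagonal(L, 0)
J = 0.1 * rng.standard_normal((P, P)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
Z = partition_function(W, L, J)
print(p_v(np.array([1.0, 0.0, 1.0]), W, L, J, Z))
```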
2305.16264.pdf | Scaling Data-Constrained Language Models
Niklas Muennighoff1  Alexander M. Rush1  Boaz Barak2  Teven Le Scao1
Aleksandra Piktus1  Nouamane Tazi1  Sampo Pyysalo3  Thomas Wolf1  Colin Raffel1
1Hugging Face  2Harvard University  3University of Turku
n.muennighoff@gmail.com
Abstract
The current trend of scaling language models involves increasing both parameter
count and training dataset size. Extrapolating this trend suggests that training
dataset size may soon be limited by the amount of text data available on the internet.
Motivated by this limit, we investigate scaling language models in data-constrained
regimes. Specifically, we run a large set of experiments varying the extent of data
repetition and compute budget, ranging up to 900 billion training tokens and 9
billion parameter models. We find that with constrained data for a fixed compute
budget, training with up to 4 epochs of repeated data yields negligible changes to
loss compared to having unique data. However, with more repetition, the value of
adding compute eventually decays to zero. We propose and empirically validate
a scaling law for compute optimality that accounts for the decreasing value of
repeated tokens and excess parameters. Finally, we experiment with approaches
mitigating data scarcity, including augmenting the training dataset with code data
or removing commonly used filters. Models and datasets from our 400 training runs
are freely available at https://github.com/huggingface/datablations .
[Figure 1 (caption below). Left panel, "Return on compute when repeating": final test loss vs. tokens (epochs), annotated "Up to 4 epochs repeating is almost as good as new data", "Rapidly diminishing returns for more repetitions", and "At 40 epochs, repeating is worthless". Right panel, "Allocating compute when repeating": parameters vs. tokens (epochs) at 10^22 FLOPs, comparing models trained, the IsoFLOP regime, and the losses and efficient frontiers predicted by the data-constrained scaling laws against those obtained by assuming repeated data is worth the same as new data.]
Figure 1: Return and Allocation when repeating data. (Left): Loss of LLMs (4.2B parameters)
scaled on repeated data decays predictably (§6). (Right): To maximize performance when repeating,
our data-constrained scaling laws and empirical data suggest training smaller models for more epochs,
in contrast to what assuming Chinchilla scaling laws [42] hold for repeated data would predict (§5).
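To see the qualitative behavior the abstract and Figure 1 describe — a few epochs of repetition are nearly as good as fresh data, while returns vanish at many epochs — here is a small sketch that models repeated tokens with an exponentially saturating "effective data" count. The functional form and the constant r_star below are illustrative assumptions for this sketch, not the fitted parameterization from the paper.

```python
import numpy as np

def effective_tokens(unique_tokens, epochs, r_star=5.0):
    """Illustrative 'effective data' curve: the first epoch counts fully,
    later repetitions contribute exponentially less, saturating at roughly
    r_star extra epochs' worth of value. r_star is a made-up constant."""
    repetitions = max(epochs - 1, 0)
    return unique_tokens * (1.0 + r_star * (1.0 - np.exp(-repetitions / r_star)))

# 100B unique tokens repeated for various numbers of epochs.
for epochs in [1, 4, 10, 40]:
    d_eff = effective_tokens(unique_tokens=100e9, epochs=epochs)
    seen = 100.0 * epochs
    print(f"{epochs:>3} epochs -> {d_eff / 1e9:6.1f}B effective of {seen:6.0f}B tokens seen")
```

Under this toy curve, 4 epochs retain most of the value of the tokens seen, while at 40 epochs almost all of the additional repetition is wasted, matching the qualitative pattern in the figure.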
37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2305.16264v4 [cs.CL] 26 Oct 2023 |
Estimation-of-Entropy-and-Mutual-Information.pdf | ARTICLE Communicated by Jonathan Victor
Estimation of Entropy and Mutual Information
Liam Paninski
liam@cns.nyu.edu
Center for Neural Science, New York University, New York, NY 10003, U.S.A.
We present some new results on the nonparametric estimation of entropy
and mutual information. First, we use an exact local expansion of theentropy function to prove almost sure consistency and central limit the-orems for three of the most commonly used discretized information esti-mators. The setup is related to Grenander’s method of sieves and placesno assumptions on the underlying probability measure generating thedata. Second, we prove a converse to these consistency theorems, demon-strating that a misapplication of the most common estimation techniquesleads to an arbitrarily poor estimate of the true information, even givenunlimited data. This “inconsistency” theorem leads to an analytical ap-proximation of the bias, valid in surprisingly small sample regimes andmore accurate than the usual
1
Nformula of Miller and Madow over a large
region of parameter space. The two most practical implications of theseresults are negative: (1) information estimates in a certain data regime arelikely contaminated by bias, even if “bias-corrected” estimators are used,and (2) confidence intervals calculated by standard techniques drasticallyunderestimate the error of the most common estimation methods.
Finally, we note a very useful connection between the bias of entropy
estimators and a certain polynomial approximation problem. By casting
bias calculation problems in this approximation theory framework, we
obtain the best possible generalization of known asymptotic bias results.
More interesting, this framework leads to an estimator with some nice
properties: the estimator comes equipped with rigorous bounds on the
maximum error over all possible underlying probability distributions,
and this maximum error turns out to be surprisingly small. We demonstrate
the application of this new estimator on both real and simulated data.
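As a small illustration of the bias problem the abstract warns about, the sketch below compares the plug-in (maximum-likelihood) entropy estimate with a Miller–Madow-style correction in an undersampled regime (more bins than samples). The distribution, bin count, and sample size are arbitrary choices made for the demonstration; this is not the estimator proposed in the paper.

```python
import numpy as np

def plugin_entropy(counts):
    """Maximum-likelihood ('plug-in') entropy estimate, in nats."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def miller_madow_entropy(counts):
    """Plug-in estimate plus the classical (m_hat - 1) / (2N) correction,
    where m_hat is the number of non-empty bins."""
    n = counts.sum()
    m_hat = np.count_nonzero(counts)
    return plugin_entropy(counts) + (m_hat - 1) / (2 * n)

# Undersampled regime: m = 100 equiprobable bins, only N = 50 samples.
rng = np.random.default_rng(0)
m, n_samples = 100, 50
true_H = np.log(m)
plug, mm = [], []
for _ in range(200):
    counts = np.bincount(rng.integers(0, m, n_samples), minlength=m)
    plug.append(plugin_entropy(counts))
    mm.append(miller_madow_entropy(counts))
print(f"true H = {true_H:.3f} nats, "
      f"plug-in mean = {np.mean(plug):.3f}, Miller-Madow mean = {np.mean(mm):.3f}")
```

Both estimates typically remain well below the true entropy here, which is the kind of residual bias in "bias-corrected" estimators that the abstract highlights.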
1 Introduction
The mathematical theory of information transmission represents a pinna-
cle of statistical research: the ideas are at once beautiful and applicable
to a remarkably wide variety of questions. While psychologists and
neurophysiologists began to apply these concepts almost immediately after
their introduction, the past decade has seen a dramatic increase in the
Neural Computation 15, 1191–1253 (2003) © 2003 Massachusetts Institute of Technology |