filename | text
---|---
103-112.pdf | Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2010)
M. Otaduy and Z. Popovic (Editors)
A Bayesian Interactive Optimization Approach to Procedural
Animation Design
Eric Brochu Tyson Brochu Nando de Freitas
University of British Columbia
Abstract
The computer graphics and animation fields are filled with applications that require the setting of tricky parameters. In many cases, the models are complex and the parameters unintuitive for non-experts. In this paper, we present an optimization method for setting parameters of a procedural fluid animation system by showing the user examples of different parametrized animations and asking for feedback. Our method employs the Bayesian technique of bringing in “prior” belief based on previous runs of the system and/or expert knowledge, to assist users in finding good parameter settings in as few steps as possible. To do this, we introduce novel extensions to Bayesian optimization, which permit effective learning for parameter-based procedural animation applications. We show that even when users are trying to find a variety of different target animations, the system can learn and improve. We demonstrate the effectiveness of our method compared to related active learning methods. We also present a working application for assisting animators in the challenging task of designing curl-based velocity fields, even with minimal domain knowledge other than identifying when a simulation “looks right”.
Categories and Subject Descriptors (according to ACM CCS): Learning [I.2.6]: Parameter Learning; User Interfaces [H.5.2]: Interaction Styles; Three-Dimensional Graphics and Realism [I.3.7]: Animation.
1 Introduction
Procedural methods for generating animation have long been
used by visual effects and games studios due to their efficiency and artist controllability. However, this control comes with a cost: a set of often unintuitive parameters confronts the user of a procedural animation system. The desired end result is often identifiable by the user, but these parameters must be tuned in a tedious trial-and-error process.
For example, realistic animation of smoke can be achieved
by driving a particle system through a simple combination of vortex rings and curl noise [BHN07]. However, even these two relatively simple procedural methods are influenced by several parameters: the velocity, radius, and magnitude of the vortex rings, and the length scale and magnitude of the curl noise. Adding more procedural “flow primitives”, such as uniform and vortical flows, sources and sinks [WH91], turbulent wind [SF93], vortex particles [SRF05], and vortex filaments [AN05] can produce a wider variety of animations, but each of these primitives carries its own set of associated parameters. These parameters can interact in subtle and non-intuitive ways, and small adjustments to certain settings may result in non-uniform changes in the appearance.
Brochu et al. [BGdF07, BdFG07] propose a Bayesian op-
timization technique to assist artists with parameter tuning for bidirectional reflectance distribution functions (BRDFs).
In their iterative scheme, the algorithm selects two sets of parameters and generates example images from them. The user selects the preferred image and the algorithm incorporates this feedback to learn a model of the user’s valuation
function over the domain of parameter values. Given this valuation function, the algorithm is able to select parameters to generate simulations that are likely to be closer to the ones wanted by the artist. The process is repeated until the user is satisfied with the results.
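A minimal sketch of the kind of preference-driven Bayesian optimization loop described above, assuming a Gaussian-process surrogate over the user's latent valuation; the `render` and `ask_user_to_pick` hooks are hypothetical placeholders, and folding pairwise feedback into win/lose pseudo-targets is a simplification of the probit preference model used in this line of work:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def preference_bo(render, ask_user_to_pick, bounds, n_iters=20, rng=None):
    """Interactive loop: show two parameter settings, keep the preferred one.

    `render(params)` produces an animation preview and `ask_user_to_pick(a, b)`
    returns 0 or 1 for the preferred preview; both are assumed hooks.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    dim = len(bounds)
    X, y = [], []  # parameter vectors and win/lose pseudo-valuations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    def sample(n):  # uniform samples inside the box constraints
        lo, hi = np.array(bounds).T
        return lo + (hi - lo) * rng.random((n, dim))

    incumbent = sample(1)[0]
    for _ in range(n_iters):
        if len(X) >= 2:
            gp.fit(np.array(X), np.array(y))
            cand = sample(256)
            mu, sd = gp.predict(cand, return_std=True)
            challenger = cand[np.argmax(mu + sd)]  # UCB acquisition
        else:
            challenger = sample(1)[0]
        pick = ask_user_to_pick(render(incumbent), render(challenger))
        winner, loser = (incumbent, challenger) if pick == 0 else (challenger, incumbent)
        X += [winner, loser]
        y += [1.0, 0.0]
        incumbent = winner
    return incumbent
```

The incumbent-versus-challenger structure mirrors the scheme of showing two parameter settings per iteration and keeping the preferred one.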
During the development of a procedural smoke anima-
tion system, we found ourselves with a parameterized system with 12 continuous parameters. Setting these was a challenge for the developers, let alone other users, so we looked to adapt [BdFG07]. In the process, though, we found that the model as presented was unsuitable for our procedural animation. In particular, we identified several limitations:
© The Eurographics Association 2010.
DOI: 10.2312/SCA/SCA10/103-112 |
2303.01469.pdf | Consistency Models
Yang Song1 Prafulla Dhariwal1 Mark Chen1 Ilya Sutskever1
Abstract
Diffusion models have significantly advanced the
fields of image, audio, and video generation, but
they depend on an iterative sampling process that
causes slow generation. To overcome this limita-
tion, we propose consistency models , a new fam-
ily of models that generate high quality samples
by directly mapping noise to data. They support
fast one-step generation by design, while still al-
lowing multistep sampling to trade compute for
sample quality. They also support zero-shot data
editing, such as image inpainting, colorization,
and super-resolution, without requiring explicit
training on these tasks. Consistency models can
be trained either by distilling pre-trained diffu-
sion models, or as standalone generative models
altogether. Through extensive experiments, we
demonstrate that they outperform existing distilla-
tion techniques for diffusion models in one- and
few-step sampling, achieving the new state-of-
the-art FID of 3.55 on CIFAR-10 and 6.20 on
ImageNet 64×64 for one-step generation. When
trained in isolation, consistency models become a
new family of generative models that can outper-
form existing one-step, non-adversarial generative
models on standard benchmarks such as CIFAR-
10, ImageNet 64×64 and LSUN 256×256.
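A minimal sketch of the one- and few-step sampling that the abstract describes, assuming a trained consistency function `f(x, sigma)` that maps a noisy point at noise level `sigma` back to its estimated origin; the noise levels in `sigmas` are illustrative values, and the simple re-noising step below omits refinements of the paper's actual multistep sampler:

```python
import torch

@torch.no_grad()
def consistency_sample(f, shape, sigmas, device="cpu"):
    """One- or few-step generation with a consistency model.

    `f(x, sigma)` is the learned consistency function; `sigmas` is a decreasing
    list of noise levels, e.g. [80.0, 24.0, 5.0] (assumed values). With a
    single entry this reduces to one-step generation.
    """
    x = sigmas[0] * torch.randn(shape, device=device)  # start from pure noise
    x = f(x, sigmas[0])                                # one-step estimate of the origin
    for sigma in sigmas[1:]:                           # optional refinement steps
        x_noisy = x + sigma * torch.randn_like(x)      # re-noise to level sigma
        x = f(x_noisy, sigma)                          # map back toward the origin
    return x
```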
1. Introduction
Diffusion models (Sohl-Dickstein et al., 2015; Song & Er-
mon, 2019; 2020; Ho et al., 2020; Song et al., 2021), also
known as score-based generative models, have achieved
unprecedented success across multiple fields, including im-
age generation (Dhariwal & Nichol, 2021; Nichol et al.,
2021; Ramesh et al., 2022; Saharia et al., 2022; Rombach
et al., 2022), audio synthesis (Kong et al., 2020; Chen et al.,
2021; Popov et al., 2021), and video generation (Ho et al., 2022b;a).
1OpenAI, San Francisco, CA 94110, USA. Correspondence to:
Yang Song <songyang@openai.com>.
Proceedings of the 40th International Conference on Machine
Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright
2023 by the author(s).
Figure 1: Given a Probability Flow (PF) ODE that smoothly
converts data to noise, we learn to map any point (e.g., x_t,
x_t′, and x_T) on the ODE trajectory to its origin (e.g., x_0)
for generative modeling. Models of these mappings are
called consistency models, as their outputs are trained to be
consistent for points on the same trajectory.
A key feature of diffusion models is the iterative
sampling process which progressively removes noise from
random initial vectors. This iterative process provides a
flexible trade-off of compute and sample quality, as using
extra compute for more iterations usually yields samples
of better quality. It is also the crux of many zero-shot data
editing capabilities of diffusion models, enabling them to
solve challenging inverse problems ranging from image
inpainting, colorization, stroke-guided image editing, to
Computed Tomography and Magnetic Resonance Imaging
(Song & Ermon, 2019; Song et al., 2021; 2022; 2023; Kawar
et al., 2021; 2022; Chung et al., 2023; Meng et al., 2021).
However, compared to single-step generative models like
GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling,
2014; Rezende et al., 2014), or normalizing flows (Dinh
et al., 2015; 2017; Kingma & Dhariwal, 2018), the iterative
generation procedure of diffusion models typically requires
10–2000 times more compute for sample generation (Song
& Ermon, 2020; Ho et al., 2020; Song et al., 2021; Zhang
& Chen, 2022; Lu et al., 2022), causing slow inference and
limited real-time applications.
Our objective is to create generative models that facilitate ef-
ficient, single-step generation without sacrificing important
advantages of iterative sampling, such as trading compute
for sample quality when necessary, as well as performing
zero-shot data editing tasks. As illustrated in Fig. 1, we
build on top of the probability flow (PF) ordinary differen-
tial equation (ODE) in continuous-time diffusion models
(Song et al., 2021), whose trajectories smoothly transition
arXiv:2303.01469v2 [cs.LG] 31 May 2023 |
2402.08797.pdf | Computing Power and the
Governance of Artificial Intelligence
Girish Sastry,∗†1Lennart Heim,∗†2Haydn Belfield,∗†3
Markus Anderljung,∗2Miles Brundage,∗1Julian Hazell,∗2,4Cullen O’Keefe,∗1,5
Gillian K. Hadfield,∗6,7Richard Ngo,1Konstantin Pilz,8George Gor,9
Emma Bluemke,2Sarah Shoker,1Janet Egan,10Robert F. Trager,11
Shahar Avin,12Adrian Weller,13Yoshua Bengio,14Diane Coyle15
1OpenAI,2Centre for the Governance of AI (GovAI),
3Leverhulme Centre for the Future of Intelligence, Uni. of Cambridge,
4Oxford Internet Institute,5Institute for Law & AI,6University of Toronto
7Vector Institute for AI,8Georgetown University,9ILINA Program,10Harvard Kennedy School,
11AI Governance Institute, Uni. of Oxford,12Centre for the Study of Existential Risk, Uni. of Cambridge,
13Uni. of Cambridge,14Uni. of Montreal /Mila,15Bennett Institute, Uni. of Cambridge
February 14, 2024
Abstract
Computing power, or "compute," is crucial for the development and deployment of artificial intelligence (AI)
capabilities. As a result, governments and companies have started to leverage compute as a means to govern AI. For
example, governments are investing in domestic compute capacity, controlling the flow of compute to competing
countries, and subsidizing compute access to certain sectors. However, these efforts only scratch the surface of
how compute can be used to govern AI development and deployment. Relative to other key inputs to AI (data and
algorithms), AI-relevant compute is a particularly effective point of intervention: it is detectable ,excludable , and
quantifiable , and is produced via an extremely concentrated supply chain . These characteristics, alongside the singular
importance of compute for cutting-edge AI models, suggest that governing compute can contribute to achieving
common policy objectives, such as ensuring the safety and beneficial use of AI. More precisely, policymakers could
use compute to facilitate regulatory visibility of AI, allocate resources to promote beneficial outcomes, and enforce
restrictions against irresponsible or malicious AI development and usage. However, while compute-based policies
and technologies have the potential to assist in these areas, there is significant variation in their readiness for
implementation. Some ideas are currently being piloted, while others are hindered by the need for fundamental
research. Furthermore, naïve or poorly scoped approaches to compute governance carry significant risks in areas
like privacy, economic impacts, and centralization of power. We end by suggesting guardrails to minimize these
risks from compute governance.
Each author contributed ideas and /or writing to the paper. However, being an author does not imply agreement with every claim made in
the paper, nor does it represent an endorsement from any author’s respective organization.
∗Denotes primary authors, who contributed most significantly to the direction and content of the paper. Both primary authors and other
authors are listed in approximately descending order of contribution.
†Indicates the corresponding authors: Girish Sastry (girish@openai.com), Lennart Heim (lennart.heim@governance.ai), and Haydn
Belfield (hb492@cam.ac.uk). Figures can be accessed at https://github.com/lheim/CPGAI-Figures .
arXiv:2402.08797v1 [cs.CY] 13 Feb 2024 |
2005.04613.pdf | arXiv:2005.04613v1 [cs.CV] 10 May 2020
Variational Clustering: Leveraging Variational
Autoencoders for Image Clustering
Vignesh Prasad*
TU Darmstadt
Germany
vignesh.prasad@tu-darmstadt.de
Dipanjan Das*
Embedded Systems and Robotics
TCS Innovation Labs, Kolkata, India
dipanjan.da@tcs.com
Brojeshwar Bhowmick
Embedded Systems and Robotics
TCS Innovation Labs, Kolkata, India
b.bhowmick@tcs.com
Abstract—Recent advances in deep learning have shown their
ability to learn strong feature representations for images. The
task of image clustering naturally requires good feature representations to capture the distribution of the data and subsequently differentiate data points from one another. Often these two aspects are dealt with independently and thus traditional feature learning alone does not suffice in partitioning the data meaningfully. Variational Autoencoders (VAEs) naturally lend themselves to learning data distributions in a latent space. Since we wish to efficiently discriminate between different clusters in the data, we propose a method based on VAEs where we use a Gaussian Mixture prior to help cluster the images accurately. We jointly learn the parameters of both the prior and the posterior distributions. Our method represents a true Gaussian Mixture VAE. This way, our method simultaneously learns a prior that captures the latent distribution of the images and a posterior to help discriminate well between data points. We also propose a novel reparametrization of the latent space consisting of a mixture of discrete and continuous variables. One key takeaway is that our method generalizes better across different datasets without using any pre-training or learnt models, unlike existing methods, allowing it to be trained from scratch in an end-to-end manner. We verify our efficacy and generalizability experimentally by achieving state-of-the-art results among unsupervised methods on a variety of datasets. To the best of our knowledge, we are the first to pursue image clustering using VAEs in a purely unsupervised manner on real image datasets.
Index Terms —Unsupervised Learning, Clustering, Variational
Inference
I. INTRODUCTION
Image Clustering is a fundamental, challenging and widely
studied problem in machine learning [3]–[8], with a variety of
applications in image retrieval [9], fast 3D reconstructions [10]
[11] [12] etc. Some classical examples are K-means [13],
Gaussian Mixture Models [14] and Spectral clustering [6]
which are promising, but require a robust feature represen-
tation for good clustering. In recent years, Deep Learning has
made huge progress in learning robust feature representations
of images. These learned representations help cluster the data
more accurately when used with traditional methods like K-
means for example [15]. One way to use deep representations,
off the shelf, is to extract the feature representation of an
image from a pre-trained model and use them directly in any
clustering algorithm [16]. The problem with such approaches
is that they don’t fully exploit the power of deep neural
networks. Song et al. [17] learn a representation to accurately
cluster the images in the dataset by integrating K-means
into the bottleneck layer of an Autoencoder. This association
enables the model to learn a meaningful clustering-oriented
representation.
This work was done when Vignesh Prasad worked at TCS Innovation Labs.
* - Equal Contribution
With the motivation to pursue a robust and generalizable
methodology in a principled way, we aim to make inferences
in a latent space learned specifically for a clustering task. The
idea is that it would be easier to group the data in this space,
compared to an arbitrary space defined by pre-trained features.
Of late, the use of generative methods for clustering has been
on the rise as their expressive power helps efficiently capture,
represent and recreate sampled data points. As we wish to
experiment with data distributions in a latent space that can
accurately represent the input data, the paradigm of Variational
Autoencoders (VAEs) lends itself directly to the task at hand.
We build on the ideas of GMVAE [1] and VaDE [2],
addressing their shortcomings while maintaining the underlying
motivation of using a Gaussian Mixture Model as the latent
space distribution. Instead of deriving the prior from a random
variable, as in GMVAE, our prior is deterministic. This is
similar to VaDE; however, we learn the parameters for the prior
and posterior jointly, unlike in VaDE which uses a pre-training
phase to initialize the parameters of the prior.
To illustrate the differences between our process, GMVAE
[1] and VaDE [2], we visualize the graphical models in Fig. 1.
In GMVAE, the Gaussian prior z2 depends on a noise variable
z1 and varies for a given cluster, as shown in Fig. 1a. Ours is
more intuitive as it depends only on the cluster, as shown in
Fig. 1c. Secondly, GMVAE expresses the categorical posterior
q(k|z1, z2) with the prior pβ(z2|z1, k) using Bayes’ rule. This
applies to VaDE too, along with fixing the GMM prior during
pre-training. We learn it during training, giving more flexible
learning, the effectiveness of which is seen in the results in
Table I. This can also be seen on a toy dataset, where our
method learns a more compact cluster representation as
compared to GMVAE, as shown in Fig. 3.
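A minimal sketch of a Gaussian-mixture VAE in the spirit described here, with a learnable mixture prior and a cluster posterior q(k|z) parameterized directly from z; layer sizes, the Bernoulli likelihood, and the single-sample ELBO estimator are illustrative assumptions rather than the paper's exact architecture:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureVAE(nn.Module):
    """Sketch of a VAE with a learnable Gaussian-mixture prior and a direct q(k|z)."""

    def __init__(self, x_dim=784, z_dim=10, k=10, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu_head, self.logvar_head = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.cluster_head = nn.Linear(z_dim, k)          # parameterizes q(k|z) directly
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))
        # learnable mixture prior p(k) and p(z|k)
        self.prior_logits = nn.Parameter(torch.zeros(k))
        self.prior_mu = nn.Parameter(0.1 * torch.randn(k, z_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(k, z_dim))

    def elbo(self, x):                                   # x in [0, 1], shape (B, x_dim)
        h = self.enc(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterization
        recon = -F.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction="none").sum(-1)               # E[log p(x|z)]
        q_k = F.softmax(self.cluster_head(z), dim=-1)               # q(k|z)
        diff = z.unsqueeze(1) - self.prior_mu                       # (B, k, z_dim)
        log_p_z_given_k = -0.5 * (diff.pow(2) / self.prior_logvar.exp()
                                  + self.prior_logvar + math.log(2 * math.pi)).sum(-1)
        log_pi = F.log_softmax(self.prior_logits, dim=-1)
        mix = (q_k * (log_p_z_given_k + log_pi - q_k.clamp_min(1e-8).log())).sum(-1)
        ent_z = 0.5 * (logvar + math.log(2 * math.pi * math.e)).sum(-1)  # H[q(z|x)]
        return (recon + mix + ent_z).mean()              # maximize this per batch
```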
Our method is more principled as we directly learn the
cluster assignment probabilities q(k|z) instead of performing
a Bayesian classification, as done in both GMVAE and VaDE.
Once the cluster predictions q(k|z) become closer to a one- |
10.1145.3600006.3613165.pdf | Efficient Memory Management for Large Language
Model Serving with PagedAttention
Woosuk Kwon1,∗ Zhuohan Li1,∗ Siyuan Zhuang1 Ying Sheng1,2 Lianmin Zheng1 Cody Hao Yu3
Joseph E. Gonzalez1 Hao Zhang4 Ion Stoica1
1UC Berkeley 2Stanford University 3Independent Researcher 4UC San Diego
Abstract
High throughput serving of large language models (LLMs)
requires batching sufficiently many requests at a time. How-
ever, existing systems struggle because the key-value cache
(KV cache) memory for each request is huge and grows
and shrinks dynamically. When managed inefficiently, this
memory can be significantly wasted by fragmentation and
redundant duplication, limiting the batch size. To address
this problem, we propose PagedAttention, an attention al-
gorithm inspired by the classical virtual memory and pag-
ing techniques in operating systems. On top of it, we build
vLLM, an LLM serving system that achieves (1) near-zero
waste in KV cache memory and (2) flexible sharing of KV
cache within and across requests to further reduce mem-
ory usage. Our evaluations show that vLLM improves the
throughput of popular LLMs by 2-4× with the same level
of latency compared to the state-of-the-art systems, such
as FasterTransformer and Orca. The improvement is more
pronounced with longer sequences, larger models, and more
complex decoding algorithms. vLLM’s source code is publicly
available at https://github.com/vllm-project/vllm .
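A minimal sketch of the block-table bookkeeping behind paged KV-cache allocation, in the spirit of the virtual-memory analogy above; the block size and data structures are illustrative and not vLLM's actual implementation:

```python
BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative value)

class PagedKVCache:
    """Toy block-table bookkeeping: KV memory is handed out in fixed-size
    blocks, so per-request waste is bounded by one partially filled block."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # ids of unused physical blocks
        self.block_tables = {}                      # request id -> [physical block ids]
        self.lengths = {}                           # request id -> tokens stored so far

    def append_token(self, req):
        """Reserve cache space for one newly generated token of request `req`."""
        table = self.block_tables.setdefault(req, [])
        n = self.lengths.get(req, 0)
        if n % BLOCK_SIZE == 0:                     # last block full, or first token
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted; caller should preempt a request")
            table.append(self.free_blocks.pop())
        self.lengths[req] = n + 1
        return table[-1], n % BLOCK_SIZE            # (physical block, slot within block)

    def free(self, req):
        """Return all blocks of a finished request to the free list."""
        self.free_blocks.extend(self.block_tables.pop(req, []))
        self.lengths.pop(req, None)
```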
1 Introduction
The emergence of large language models ( LLMs ) like GPT [ 5,
37] and PaLM [ 9] have enabled new applications such as pro-
gramming assistants [ 6,18] and universal chatbots [ 19,35]
that are starting to profoundly impact our work and daily
routines. Many cloud companies [ 34,44] are racing to pro-
vide these applications as hosted services. However, running
these applications is very expensive, requiring a large num-
ber of hardware accelerators such as GPUs. According to
recent estimates, processing an LLM request can be 10× more
expensive than a traditional keyword query [ 43]. Given these
high costs, increasing the throughput—and hence reducing
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. Copyrights for third-
party components of this work must be honored. For all other uses, contact
the owner/author(s).
SOSP ’23, October 23–26, 2023, Koblenz, Germany
©2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0229-7/23/10.
https://doi.org/10.1145/3600006.3613165
[Figure 1 graphic: left, memory-usage breakdown on an NVIDIA A100 40GB (parameters 26GB, 65%; KV cache >30%; others); right, throughput (tokens/s) versus batch size (# requests) for existing systems and vLLM.]
Figure 1. Left: Memory layout when serving an LLM with
13B parameters on NVIDIA A100. The parameters (gray)
persist in GPU memory throughout serving. The memory
for the KV cache (red) is (de)allocated per serving request.
A small amount of memory (yellow) is used ephemerally
for activation. Right: vLLM smooths out the rapid growth
curve of KV cache memory seen in existing systems [ 31,60],
leading to a notable boost in serving throughput.
the cost per request—of LLM serving systems is becoming
more important.
At the core of LLMs lies an autoregressive Transformer
model [ 53]. This model generates words (tokens), one at a
time, based on the input (prompt) and the previous sequence
of the output’s tokens it has generated so far. For each re-
quest, this expensive process is repeated until the model out-
puts a termination token. This sequential generation process
makes the workload memory-bound , underutilizing the com-
putation power of GPUs and limiting the serving throughput.
Improving the throughput is possible by batching multi-
ple requests together. However, to process many requests
in a batch, the memory space for each request should be
efficiently managed. For example, Fig. 1 (left) illustrates the
memory distribution for a 13B-parameter LLM on an NVIDIA
A100 GPU with 40GB RAM. Approximately 65% of the mem-
ory is allocated for the model weights, which remain static
during serving. Close to 30% of the memory is used to store
the dynamic states of the requests. For Transformers, these
states consist of the key and value tensors associated with the
attention mechanism, commonly referred to as KV cache [41],
which represent the context from earlier tokens to gener-
ate new output tokens in sequence. The remaining small
∗Equal contribution.
arXiv:2309.06180v1 [cs.LG] 12 Sep 2023 |
2304.06174.pdf | Accurate transition state generation with an object-aware
equivariant elementary reaction diffusion model
Chenru Duan1, 2, *, Yuanqi Du3, Haojun Jia1, 2, and Heather J. Kulik1, 2
1Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA, 02139
2Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139
3Department of Computer Science, Cornell University, Ithaca, NY , 14850
*Corresponding to: duanchenru@gmail.com
Abstract
Transition state (TS) search is key in chemistry for elucidating reaction mechanisms and exploring
reaction networks. The search for accurate 3D TS structures, however, requires numerous com-
putationally intensive quantum chemistry calculations due to the complexity of potential energy
surfaces. Here, we developed an object-aware SE(3) equivariant diffusion model that satisfies all
physical symmetries and constraints for generating sets of structures – reactant, TS, and product –
in an elementary reaction. Provided reactant and product, this model generates a TS structure in
seconds instead of hours required when performing quantum chemistry-based optimizations. The
generated TS structures achieve a median of 0.08 Å root mean square deviation compared to the
true TS. With a confidence scoring model for uncertainty quantification, we approach an accuracy
required for reaction rate estimation (2.6 kcal/mol) by only performing quantum chemistry-based
optimizations on 14% of the most challenging reactions. We envision the proposed approach useful
in constructing large reaction networks with unknown mechanisms.
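A minimal sketch of the confidence-gated workflow the abstract describes, where generated TS structures are accepted when a confidence score is high and routed to quantum-chemistry refinement otherwise; the threshold and the `generate_ts`, `confidence`, and `qc_optimize` callables are hypothetical placeholders:

```python
def ts_search(reactions, generate_ts, confidence, qc_optimize, threshold=0.8):
    """Generate transition-state guesses with the diffusion model and only
    refine low-confidence ones with expensive quantum-chemistry optimization.

    `generate_ts`, `confidence`, and `qc_optimize` are assumed callables for
    the diffusion sampler, the confidence scorer, and a DFT-based TS optimizer.
    """
    results = []
    for reactant, product in reactions:
        ts_guess = generate_ts(reactant, product)      # seconds, not hours
        score = confidence(reactant, ts_guess, product)
        if score >= threshold:
            results.append(ts_guess)                   # trust the generated structure
        else:
            results.append(qc_optimize(ts_guess))      # refine only the hard cases
    return results
```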
Introduction
Breaking down complex chemical reactions into their constituent elementary reactions is key for understanding
reaction mechanisms and designing processes that favor target reaction pathways.1–3Due to the transient nature
of the intermediate and transition state (TS) involved in these elementary reactions, it is difficult to isolate and
characterize these structures experimentally. Instead, high throughput quantum chemistry computation, e.g., with
density functional theory (DFT),4provides valuable insights on potential reaction mechanisms by constructing
comprehensive reaction networks.2, 5These networks are established by either iteratively enumerating potential
elementary reactions on-the-fly given existing species6, 7or propagating biased ab initio molecular dynamics followed
by elementary reaction refinement.8–10Both approaches, however, require a tremendous number of quantum chemistry
calculations due to the large number of species potentially involved in a chemical reaction.11–13
Among all DFT energy evaluations, the overwhelming majority comes from locating an accurate TS structure
solely based on reactant and product information.2, 3Nonetheless, obtaining these TS structures is vital for estimating
reaction rates and determining dominant reaction pathways in a reaction network. Conventional TS search algorithms
(e.g., nudged elastic band14) are computationally intensive and notorious for their difficulty in convergence,15yielding
Preprint. Under review.
arXiv:2304.06174v2 [physics.chem-ph] 17 Apr 2023 |
2207.05221.pdf | Language Models (Mostly) Know What They Know
Saurav Kadavath∗, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez,
Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston,
Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai,
Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson,
Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson,
Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph,
Ben Mann, Sam McCandlish, Chris Olah, Jared Kaplan∗
Anthropic
Abstract
We study whether language models can evaluate the validity of their own claims and predict
which questions they will be able to answer correctly. We first show that larger models are
well-calibrated on diverse multiple choice and true/false questions when they are provided
in the right format. Thus we can approach self-evaluation on open-ended sampling tasks
by asking models to first propose answers, and then to evaluate the probability "P(True)"
that their answers are correct. We find encouraging performance, calibration, and scaling
for P(True) on a diverse array of tasks. Performance at self-evaluation further improves
when we allow models to consider many of their own samples before predicting the va-
lidity of one specific possibility. Next, we investigate whether models can be trained to
predict "P(IK)", the probability that "I know" the answer to a question, without reference
to any particular proposed answer. Models perform well at predicting P(IK) and partially
generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The
predicted P(IK) probabilities also increase appropriately in the presence of relevant source
materials in the context, and in the presence of hints towards the solution of mathematical
word problems. We hope these observations lay the groundwork for training more honest
models, and for investigating how honesty generalizes to cases where models are trained
on objectives other than the imitation of human writing.
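A minimal sketch of the P(True) self-evaluation procedure summarized above: the model proposes an answer, is then asked whether that answer is true, and P(True) is read off the next-token probabilities; the prompt wording and the `model`/`next_token_probs` wrappers are assumptions rather than the paper's exact protocol:

```python
def p_true(model, next_token_probs, question):
    """Estimate the probability the model assigns to its own answer being correct.

    `model(prompt)` returns a sampled completion and `next_token_probs(prompt)`
    returns a {token: probability} dict for the next token; both are hypothetical
    wrappers around whatever LM API is available.
    """
    answer = model(f"Question: {question}\nAnswer:").strip()
    eval_prompt = (
        f"Question: {question}\n"
        f"Proposed Answer: {answer}\n"
        "Is the proposed answer:\n (A) True\n (B) False\n"
        "The proposed answer is:"
    )
    probs = next_token_probs(eval_prompt)
    p_a, p_b = probs.get(" (A)", 0.0), probs.get(" (B)", 0.0)  # assumed option tokens
    return answer, p_a / max(p_a + p_b, 1e-9)                  # renormalize over options
```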
∗Correspondence to: {saurav, jared}@anthropic.com
Author contributions are listed at the end of the paper.
arXiv:2207.05221v4 [cs.CL] 21 Nov 2022 |
10.1016.j.cell.2023.04.032.pdf | Article
RNA recoding in cephalopods tailors microtubule
motor protein function
Graphical abstract
Highlights
• RNA editing in squid specifies unique kinesin protein variants in different tissues
• Unique kinesin variants are made acutely in response to seawater temperature
• Cold-specific kinesin variants have enhanced single-molecule motility in the cold
• Cephalopod editomes can reveal functional substitutions in non-cephalopod proteins
Authors
Kavita J. Rangan, Samara L. Reck-Peterson
Correspondence
krangan@health.ucsd.edu (K.J.R.),
sreckpeterson@health.ucsd.edu (S.L.R.-P.)
In brief
RNA recoding in squid specifies unique
kinesin variants with distinct activities in different tissues and in response to changes in seawater temperature, and cephalopod recoding sites provide a guide to identifying functional substitutions in non-cephalopod motor proteins.
Rangan & Reck-Peterson, 2023, Cell 186, 2531–2543
June 8, 2023 © 2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.04.032 ll
|
2310.17680.pdf | CODEFUSION : A Pre-trained Diffusion Model for Code Generation
Mukul Singh
Microsoft
Delhi, India
José Cambronero
Sumit Gulwani
Vu Le
Microsoft
Redmond, US
Carina Negreanu
Microsoft Research
Cambridge, UK
Gust Verbruggen
Microsoft
Keerbergen, Belgium
Abstract
Imagine a developer who can only change their
last line of code—how often would they have
to start writing a function from scratch before
it is correct? Auto-regressive models for code
generation from natural language have a similar
limitation: they do not easily allow reconsid-
ering earlier tokens generated. We introduce
CODEFUSION , a pre-trained diffusion code gen-
eration model that addresses this limitation by
iteratively denoising a complete program con-
ditioned on the encoded natural language. We
evaluate CODEFUSION on the task of natural
language to code generation for Bash, Python,
and Microsoft Excel conditional formatting
(CF) rules. Experiments show that CODEFU -
SION (75M parameters) performs on par with
state-of-the-art auto-regressive systems (350M–
175B parameters) in top-1 accuracy and outper-
forms them in top-3 and top-5 accuracy, due to
its better balance in diversity versus quality.
1 Introduction
Auto-regressive code generation models (Wang
et al., 2021; Brown et al., 2020; Scholak et al.,
2021; Feng et al., 2020; Fried et al., 2022) can-
not easily reconsider tokens generated earlier in
the decoding process. This limitation can lead to
lower diversity generations (Lin et al., 2023) in
the related domain of text. To balance diversity
and quality of candidates generated, prior work
has explored decoding strategies such as grouped
beam search (Vijayakumar et al., 2018) or nucleus
sampling (Holtzman et al., 2019).
Diffusion models, which have shown remark-
able performance in image generation (Dhariwal
and Nichol, 2021), have recently been extended
to generate diverse text (Li et al., 2022; Lin et al.,
2023). These approaches use an embedding layer
to convert discrete tokens to continuous embed-
dings, where Gaussian noise can be added and pre-
dicted, to imitate the diffusion process. To map
denoised embeddings back to discrete text, these approaches then select the vocabulary token with
the closest embedding. In the code domain, where
there are many syntactic and semantic constraints
between tokens, independently projecting embed-
dings back to tokens can yield invalid programs.
We propose CODEFUSION , a natural language
to code (NL-to-code) model that combines an
encoder-decoder architecture (Raffel et al., 2020)
with a diffusion process. The encoder maps the
NL into a continuous representation, which is used
by the diffusion model as an additional condition
for denoising random Gaussian noise input. To
generate syntactically correct code, we then feed
the denoised embeddings to a transformer decoder,
with full self-attention and cross attention with the
embedded utterance, to obtain probability distribu-
tions over code tokens. Finally, we select the token
with the highest probability at each index.
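A minimal sketch of the inference pipeline as described: encode the natural-language utterance, iteratively denoise a continuous latent conditioned on it, then decode to per-position token probabilities and take the argmax at each index; the module interfaces, number of steps, and dimensions are illustrative assumptions:

```python
import torch

@torch.no_grad()
def generate_code(nl_tokens, encoder, denoiser, decoder, steps=50, seq_len=128, dim=512):
    """Sketch of CODEFUSION-style inference with assumed module interfaces:
    encoder(nl) -> NL memory, denoiser(x, t, memory) -> denoised embeddings,
    decoder(x, memory) -> per-position logits over the code vocabulary."""
    memory = encoder(nl_tokens)                       # continuous NL representation
    x = torch.randn(1, seq_len, dim)                  # start from Gaussian noise
    for t in reversed(range(steps)):                  # simple iterative denoising loop
        x = denoiser(x, torch.tensor([t]), memory)    # condition on the utterance
    logits = decoder(x, memory)                       # (1, seq_len, vocab)
    return logits.argmax(dim=-1)                      # highest-probability token per index
```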
To pre-train CODEFUSION for code generation,
we extend the continuous paragraph denoising
(CPD) task introduced in Lin et al. (2023) to the
code domain. Specifically, we only apply noise
to tokens that correspond to identifiers in code or
to built-in keywords in the target language. This
denoising task allows the model to learn relations
between critical code tokens (like variable names,
function names and control flow built-ins).
We find that CODEFUSION yields more diverse
code (higher n-gram fraction, lower embedding
similarity, and higher edit distance) than auto-
regressive models (see Table 2). The CPD objec-
tive, which biases the model towards learning to
remove noise in a context-aware fashion, paired
with a decoder that has access to the full denoised
representation, jointly lead CODEFUSION to pro-
duce 48.5% more syntactically correct generations
(averaged over three languages) when compared to
GENIE , a text diffusion model (Table 3).
We evaluate CODEFUSION on NL-to-code for
three different languages: Python (Yin et al., 2018),
Bash (Lin et al., 2018), and conditional formattingarXiv:2310.17680v1 [cs.SE] 26 Oct 2023 |
2002.05227.pdf | Variational Autoencoders with Riemannian Brownian Motion Priors
Dimitris Kalatzis1 David Eklund2 Georgios Arvanitidis3 Søren Hauberg1
Abstract
Variational Autoencoders (VAEs) represent the
given data in a low-dimensional latent space,
which is generally assumed to be Euclidean. This
assumption naturally leads to the common choice
of a standard Gaussian prior over continuous la-
tent variables. Recent work has, however, shown
that this prior has a detrimental effect on model
capacity, leading to subpar performance. We pro-
pose that the Euclidean assumption lies at the
heart of this failure mode. To counter this, we as-
sume a Riemannian structure over the latent space,
which constitutes a more principled geometric
view of the latent codes, and replace the stan-
dard Gaussian prior with a Riemannian Brownian
motion prior. We propose an efficient inference
scheme that does not rely on the unknown normal-
izing factor of this prior. Finally, we demonstrate
that this prior significantly increases model capac-
ity using only one additional scalar parameter.
1. Introduction
Variational autoencoders (VAEs) (Kingma & Welling, 2014;
Rezende et al., 2014) simultaneously learn a conditional
densityp(x|z)of high dimensional observations and low
dimensional representations zgiving rise to these observa-
tions. In V AEs, a prior distribution p(z)is assigned to the
latent variables which is typically a standard Gaussian. It
has, unfortunately, turned out that this choice of distribution
is limiting the modelling capacity of VAEs and richer priors
have been proposed instead (Tomczak & Welling, 2017;
van den Oord et al., 2017; Bauer & Mnih, 2018; Klushyn
et al., 2019). In contrast to this popular view, we will ar-
gue that the limitations of the prior are not due to lack of
1Section for Cognitive Systems, Department of Applied Mathe-
matics and Computer Science, Technical University of Denmark
2Research Institutes of Sweden, Isafjordsgatan 22, 164 40 Kista,
Sweden3Empirical Inference Department, Max Planck Institute
for Intelligent Systems, Tübingen, Germany. Correspondence to:
Dimitris Kalatzis <dika@dtu.dk>.
Proceedings of the 37th International Conference on Machine
Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by
the author(s).
Figure 1. The latent space priors of two V AEs trained on the digit
1from MNIST. Left: Using a unit Gaussian prior. Right: Us-
ing a Riemannian Brownian motion (ours) with trainable (scalar)
variance.
capacity , but rather lack of principle .
Informally, the Gaussian prior has two key problems.
1. The Euclidean representation is arbitrary. Behind
the Gaussian prior lies the assumption that the latent space
Zis Euclidean. However, if the decoder pθ(x|z)is of suf-
ficiently high capacity, then it is always possible to repa-
rameterize the latent space from ztoh(z),h:Z→Z , and
then let the decoder invert this reparameterization as part
of its decoding process (Arvanitidis et al., 2018; Hauberg,
2018b). This implies that we cannot assign any meaning
to specific instantiations of the latent variables, and that
Euclidean distances carry limited meaning in Z. This is an
identifiability problem and it is well-known that even the
most elementary latent variable models are subject to such.
For example, Gaussian mixtures can be reparameterized by
permuting cluster indices, and principal components can be
arbitrarily rotated (Bishop, 2006).
2. Latent manifolds are mismapped onto Z. In all but
the simplest cases, the latent manifold Mgiving rise to data
observations is embedded in Z. An encoder with adequate
capacity will always recover some smoothened form of M,
which will either result in the latent space containing “holes”
of low density or, in Mbeing mapped to the whole of Z
under the influence of the prior. Both cases will lead to
bad samples or convergence problems. This problem is
called manifold mismatch (Davidson et al., 2018; Falorsi
et al., 2018) and is closely related to distribution mismatch
(Hoffman & Johnson, 2016; Bauer & Mnih, 2018; Rosca
et al., 2018) where the prior samples from regions to which
the variational posterior (or encoder) does not assign any
density. A graphical illustration of this situation can be
arXiv:2002.05227v3 [cs.LG] 7 Aug 2020 |
10.1038.s41593-023-01304-9.pdf | Nature Neuroscience | Volume 26 | May 2023 | 858–866 858
nature neuroscience
Article
https://doi.org/10.1038/s41593-023-01304-9
Semantic reconstruction of continuous
language from non-invasive brain recordings
Jerry Tang1, Amanda LeBel 2, Shailee Jain 1 & Alexander G. Huth 1,2
A brain–computer interface that decodes continuous language from
non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.
Previous brain–computer interfaces have demonstrated that speech
articulation1 and other signals2 can be decoded from intracranial
recordings to restore communication to people who have lost the ability
to speak3,4. Although effective, these decoders require invasive neuro -
surgery, making them unsuitable for most other uses. Language decod -
ers that use non-invasive recordings could be more widely adopted and
have the potential to be used for both restorative and augmentative
applications. Non-invasive brain recordings can capture many kinds of
linguistic information5–8, but previous attempts to decode this informa -
tion have been limited to identifying one output from among a small
set of possibilities9–12, leaving it unclear whether current non-invasive
recordings have the spatial and temporal resolution required to decode
continuous language.
Here we introduce a decoder that takes non-invasive brain record -
ings made using functional magnetic resonance imaging (fMRI) and
reconstructs perceived or imagined stimuli using continuous natural
language. To accomplish this, we needed to overcome one major obsta -
cle: the low temporal resolution of fMRI. Although fMRI has excellent
spatial specificity, the blood-oxygen-level-dependent (BOLD) signal
that it measures is notoriously slow—an impulse of neural activity causes BOLD to rise and fall over approximately 10 s (ref. 13). For natu -
rally spoken English (over two words per second), this means that each
brain image can be affected by over 20 words. Decoding continuous
language thus requires solving an ill-posed inverse problem, as there
are many more words to decode than brain images. Our decoder accom -
plishes this by generating candidate word sequences, scoring the
likelihood that each candidate evoked the recorded brain responses
and then selecting the best candidate.
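A minimal sketch of the candidate-scoring decoding described above: a language model proposes continuations, the encoding model predicts the brain response each candidate would evoke, and the best-matching candidates are kept; the beam width, the negative-squared-error score, and the `propose_continuations`/`encoding_model` interfaces are assumptions:

```python
import numpy as np

def decode_language(brain_responses, propose_continuations, encoding_model, beam_width=10):
    """Beam-search style decoding: keep the word sequences whose predicted
    brain responses (via the encoding model) best match the recorded ones.

    `propose_continuations(seq)` returns candidate extensions from a language
    model; `encoding_model(seq)` predicts the fMRI response the sequence would
    evoke. Negative squared error stands in for the likelihood score here.
    """
    beam = [("", 0.0)]                                   # (word sequence, score)
    for recorded in brain_responses:                     # one recorded image at a time
        candidates = []
        for seq, _ in beam:
            for ext in propose_continuations(seq):
                new_seq = (seq + " " + ext).strip()
                predicted = encoding_model(new_seq)
                score = -float(np.sum((predicted - recorded) ** 2))
                candidates.append((new_seq, score))
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beam[0][0]                                    # best-scoring reconstruction
```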
To compare word sequences to a subject’s brain responses, we
used an encoding model5 that predicts how the subject’s brain responds
to natural language. We recorded brain responses while the subject
listened to 16 h of naturally spoken narrative stories, yielding over five
times more data than the typical language fMRI experiment. We trained
the encoding model on this dataset by extracting semantic features that
capture the meaning of stimulus phrases8,14–17 and using linear regres-
sion to model how the semantic features influence brain responses
(Fig. 1a). Given any word sequence, the encoding model predicts how
the subject’s brain would respond when hearing the sequence with
considerable accuracy (Extended Data Fig. 1). The encoding model can
then score the likelihood that the word sequence evoked the recorded
Received: 1 April 2022
Accepted: 15 March 2023
Published online: 1 May 2023
1Department of Computer Science, The University of Texas at Austin, Austin, TX, USA. 2Department of Neuroscience, The University of Texas at Austin,
Austin, TX, USA. e-mail: huth@cs.utexas.edu |
2004.10188.pdf | Journal of Machine Learning Research 21 (2020) 1-41 Submitted 4/20; Revised 10/20; Published 11/20
Residual Energy-Based Models for Text
Anton Bakhtin∗♦ Yuntian Deng∗⋆ Sam Gross♦ Myle Ott♦
Marc’Aurelio Ranzato♦ Arthur Szlam♦
{yolo,sgross,myleott,ranzato,aszlam}@fb.com dengyuntian@seas.harvard.edu
♦Facebook AI Research ⋆Harvard University
770 Broadway, New York, NY 10003 33 Oxford St., Cambridge, MA 02138
Editor: Samy Bengio
Abstract
Current large-scale auto-regressive language models (Radford et al., 2019; Liu et al., 2018;
Graves, 2013) display impressive fluency and can generate convincing text. In this work we
start by asking the question: Can the generations of these models be reliably distinguished
from real text by statistical discriminators? We find experimentally that the answer is
affirmative when we have access to the training data for the model, and guardedly affirmative
even if we do not.
This suggests that the auto-regressive models can be improved by incorporating the
(globally normalized) discriminators into the generative process. We give a formalism for
this using the Energy-Based Model framework, and show that it indeed improves the results
of the generative models, measured both in terms of perplexity and in terms of human
evaluation.
Keywords: energy-based models, text generation, negative sampling, importance sampling,
generalization, real/fake discrimination
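A minimal sketch of the residual-energy idea summarized in this abstract: candidates drawn from the base autoregressive LM are resampled in proportion to exp(-E(x)), approximating the joint model p(x) proportional to p_LM(x) exp(-E(x)); the `sample_from_lm` and `energy` wrappers are assumed, and this resampling view is a simplification of the paper's full treatment:

```python
import math
import random

def sample_residual_ebm(sample_from_lm, energy, n_candidates=32):
    """Draw candidates x ~ p_LM and resample one with weight exp(-E(x)).

    Because candidates are drawn from p_LM, the LM factor cancels in the
    importance weights, so only the energy term is needed. `sample_from_lm()`
    and `energy(x)` are assumed wrappers around the LM and the learned energy.
    """
    candidates = [sample_from_lm() for _ in range(n_candidates)]
    weights = [math.exp(-energy(x)) for x in candidates]
    total = sum(weights)
    return random.choices(candidates, weights=[w / total for w in weights], k=1)[0]
```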
1. Introduction
Energy-based models (EBMs) have a long history in machine learning (Hopfield, 1982; Hinton,
2002a; LeCun et al., 2006), especially in the image domain (Teh et al., 2003; Ranzato et al.,
2013). Their appeal stems from the minimal assumptions they make about the generative
process of the data: they are a strict generalization of probability models, as the energy
function need not be normalized or even have convergent integral. Recent works (Du and
Mordatch, 2019) have demonstrated that they can achieve excellent performance as generative
models. However, despite several promising efforts (Rosenfeld et al., 2001; Wang et al., 2015,
2017; Wang and Ou, 2017, 2018a), they still have not been as successful in the text domain
as locally-normalized auto-regressive models (Radford et al., 2019), which generate each word
sequentially conditioned on all previous words (such that the probabilities are normalized
per word, hence the name “locally-normalized”). This formulation enables locally-normalized
auto-regressive models to be trained efficiently via maximum likelihood and generate samples
of remarkable quality.
Nevertheless, in the text domain, local normalization and auto-regression leave room for
improvement. For example, at training time, standard neural language models (LMs) are
∗Equal contribution. Corresponding author: Marc’Aurelio Ranzato ( ranzato@fb.com ).
©2020 Anton Bakhtin, Yuntian Deng, Sam Gross, Myle Ott, Marc’Aurelio Ranzato, Arthur Szlam.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/ . Attribution requirements are provided
at http://jmlr.org/papers/v21/20-326.html.
arXiv:2004.10188v2 [cs.CL] 21 Dec 2020 |
2309.03409.pdf | LARGE LANGUAGE MODELS AS OPTIMIZERS
Chengrun Yang* Xuezhi Wang Yifeng Lu Hanxiao Liu
Quoc V. Le Denny Zhou Xinyun Chen*
Google DeepMind
*Equal contribution
ABSTRACT
Optimization is ubiquitous. While derivative-based algorithms have been powerful
tools for various problems, the absence of gradient imposes challenges on many
real-world applications. In this work, we propose Optimization by PROmpting
(OPRO), a simple and effective approach to leverage large language models (LLMs)
as optimizers, where the optimization task is described in natural language. In
each optimization step, the LLM generates new solutions from the prompt that
contains previously generated solutions with their values, then the new solutions
are evaluated and added to the prompt for the next optimization step. We first
showcase OPRO on linear regression and traveling salesman problems, then move
on to prompt optimization where the goal is to find instructions that maximize
the task accuracy. With a variety of LLMs, we demonstrate that the best prompts
optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K,
and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/
google-deepmind/opro .
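A minimal sketch of the OPRO loop described in the abstract: a meta-prompt lists previously generated solutions with their scores, the optimizer LLM proposes new candidates, and the best solutions are carried into the next step; the meta-prompt wording and the `llm`/`evaluate` hooks are assumptions, not the released implementation:

```python
def opro(task_description, llm, evaluate, n_steps=20, per_step=4, keep=8):
    """Optimization by PROmpting: iterate propose-and-score with an LLM.

    `llm(prompt)` returns one candidate solution as text and `evaluate(sol)`
    returns its scalar score on the task; both are assumed wrappers.
    """
    history = []                                           # (solution, score) pairs
    for _ in range(n_steps):
        top = sorted(history, key=lambda p: p[1])[-keep:]  # best-so-far, ascending
        meta_prompt = task_description + "\n\nPrevious solutions and scores:\n"
        meta_prompt += "\n".join(f"text: {s}\nscore: {v}" for s, v in top)
        meta_prompt += "\n\nWrite a new solution that achieves a higher score:"
        for _ in range(per_step):
            candidate = llm(meta_prompt)
            history.append((candidate, evaluate(candidate)))
    return max(history, key=lambda p: p[1])                # best solution found
```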
[Figure 1 plots: training accuracy versus number of optimization steps for (a) GSM8K and (b) BBH movie_recommendation.]
Figure 1: Prompt optimization on GSM8K (Cobbe et al., 2021) and BBH (Suzgun et al., 2022)
movie_recommendation. The optimization on GSM8K has pre-trained PaLM 2-L as the scorer and
the instruction-tuned PaLM 2-L (denoted PaLM 2-L-IT ) as the optimizer; the optimization on
BBH movie_recommendation has text-bison as the scorer and PaLM 2-L-IT as the optimizer.
Each dot is the average accuracy across all (up to 8) generated instructions in the single step, and the
shaded region represents standard deviation. See Section 5 for more details on experimental setup.
Table 1: Top instructions with the highest GSM8K zero-shot test accuracies from prompt optimization
with different optimizer LLMs. All results use the pre-trained PaLM 2-L as the scorer.
Source Instruction Acc
Baselines
(Kojima et al., 2022) Let’s think step by step. 71.8
(Zhou et al., 2022b) Let’s work this out in a step by step way to be sure we have the right answer. 58.8
(empty string) 34.0
Ours
PaLM 2-L-IT Take a deep breath and work on this problem step-by-step. 80.2
PaLM 2-L Break this down. 79.9
gpt-3.5-turbo A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem. 78.5
gpt-4 Let’s combine our numerical command and clear thinking to quickly and accurately decipher the answer. 74.5
arXiv:2309.03409v2 [cs.LG] 7 Dec 2023 |
1810.08575.pdf | Supervising strong learners
by amplifying weak experts
Paul Christiano
OpenAI
paul@openai.com
Buck Shlegeris∗
bshlegeris@gmail.com
Dario Amodei
OpenAI
damodei@openai.com
Abstract
Many real world learning tasks involve complex or hard-to-specify objectives, and
using an easier-to-specify proxy can lead to poor performance or misaligned be-
havior. One solution is to have humans provide a training signal by demonstrating
or judging performance, but this approach fails if the task is too complicated for a
human to directly evaluate. We propose Iterated Amplification, an alternative train-
ing strategy which progressively builds up a training signal for difficult problems
by combining solutions to easier subproblems. Iterated Amplification is closely
related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017b), except that it
uses no external reward function. We present results in algorithmic environments,
showing that Iterated Amplification can efficiently learn complex behaviors.
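A minimal sketch of one amplification step as described in the abstract: a hard question is decomposed into easier subquestions, each is answered by the current model, and the answers are combined into a training target that is then distilled back into the model; the `decompose` and `combine` hooks stand in for the human or learned components and are purely illustrative:

```python
def amplify(question, model, decompose, combine):
    """One amplification step: build a stronger answer for `question`
    by composing the current model's answers to easier subquestions.

    `decompose` and `combine` are assumed stand-ins for the (human-guided)
    task decomposition; `model(q)` answers a single subquestion.
    """
    subquestions = decompose(question)
    sub_answers = [model(q) for q in subquestions]
    return combine(question, subquestions, sub_answers)   # amplified training target

def iterated_amplification(questions, model, train, decompose, combine, rounds=5):
    """Repeatedly distill the amplified (model + decomposition) system back
    into the model, so the training signal is built up without external rewards."""
    for _ in range(rounds):
        targets = [(q, amplify(q, model, decompose, combine)) for q in questions]
        model = train(model, targets)                      # supervised distillation step
    return model
```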
1 Introduction
If we want to train an ML system to perform a task, we need to be able to evaluate how well it is
doing. Whether our training signal takes the form of labels, rewards, or something else entirely, we
need some way to generate that signal.
If our goal can be evaluated automatically, such as winning a game of Go, or if we have an algorithm
that can generate examples of correct behavior, then generating a training signal is trivial. In these
cases we might say that there is an “algorithmic” training signal.
Unfortunately, most useful tasks don’t have an algorithmic training signal. So in current applications
of machine learning, humans often provide the training signal. This can be done by having a human
demonstrate the task, for example labeling an image or teleoperating a robot, or by learning a reward
function from human judgments. For these classes of tasks, we could say there is a “human” training
signal.
However, there are harder tasks for which we can’t compute demonstrations or rewards even with
human assistance, and for which we currently have no clear method to get a meaningful training
signal. Consider making economic policy decisions, advancing the scientific frontier, or managing the
security of a large network of computers. Some of these tasks are “beyond human scale” – a single
human can’t perform them and can’t make sense of their massive observation space well enough to
judge the behavior of an agent. It may be possible for a human to judge performance in the very long
run (for example, by looking at economic growth over several years), but such long-term feedback is
very slow to learn from. We currently have no way to learn how to perform such tasks much better
than a human.
The overall situation is depicted in Table 1, which shows six different combinations of training signal
source and problem formulation (supervised learning or RL). The bulk of ML practice operates in
the top center box (supervised learning from human labels), the bottom left box (RL with a scripted
reward), and sometimes the top left box (supervised learning of algorithms). The bottom center box
∗Work done while at OpenAI.
arXiv:1810.08575v1 [cs.LG] 19 Oct 2018 |
2402.06627.pdf | Feedback Loops With Language Models Drive
In-Context Reward Hacking
Alexander Pan
UC Berkeley
aypan.17@berkeley.edu
Erik Jones
UC Berkeley
erjones@berkeley.edu
Meena Jagadeesan
UC Berkeley
mjagadeesan@berkeley.edu
Jacob Steinhardt
UC Berkeley
jsteinhardt@berkeley.edu
Abstract
Language models influence the external world: they query APIs that read and
write to web pages, generate content that shapes human behavior, and run system
commands as autonomous agents. These interactions form feedback loops : LLM
outputs affect the world, which in turn affect subsequent LLM outputs. In this
work, we show that feedback loops can cause in-context reward hacking (ICRH),
where the LLM at test-time optimizes a (potentially implicit) objective but creates
negative side effects in the process. For example, consider an LLM agent posting
tweets with the objective of maximizing Twitter engagement; the LLM may retrieve
its previous tweets into the context window and make its subsequent tweets more
controversial, increasing engagement but also toxicity. We identify and study two
processes that lead to ICRH: output-refinement andpolicy-refinement . For these
processes, evaluations on static datasets are insufficient—they miss the feedback
effects and thus cannot capture the most harmful behavior. In response, we provide
three recommendations for evaluation to capture more instances of ICRH. As AI
development accelerates, the effects of feedback loops will proliferate, increasing
the need to understand their role in shaping LLM behavior.
1 Introduction
Language models are increasingly influencing the real world. As demand for AI applications
accelerates [Benaich et al., 2023], developers are beginning to augment language models (LLMs)
with the ability to call external APIs during inference [Mialon et al., 2023], retrieve documents [Jiang
et al., 2023], execute code [Zhou et al., 2023a], and act as autonomous agents [Richards, 2023].
LLMs that interact with the world induce feedback loops : the previous outputs affect the world
state, which in turn shapes subsequent outputs (Figure 1). For example, Microsoft’s Sydney chat
bot (the LLM) interacts with Twitter (the world) by searching through Twitter and placing tweets
into its context window. This interaction induced a feedback loop when a user jailbroke Sydney
(previous output) and tweeted about it; in a later dialog with the same user, Sydney retrieved the
tweet and became hostile (subsequent output) [Perrigo, 2023]. As LLMs are given greater access to
tools [OpenAI, 2023b] and deployed in more settings [Grant, 2023], feedback loops will become
ubiquitous [Bottou et al., 2013].
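A minimal sketch of the kind of feedback loop described above, using the tweet-engagement example: on every cycle the agent's own previous outputs re-enter its context, so the stated objective keeps getting optimized further; the `llm`, `post`, and `retrieve_recent_tweets` hooks are hypothetical and no real platform API is implied:

```python
def engagement_loop(llm, post, retrieve_recent_tweets, objective, n_cycles=10):
    """Simulate an LLM agent whose own past outputs re-enter its context.

    `llm(prompt)` drafts a tweet, `post(text)` publishes it and returns an
    engagement score, and `retrieve_recent_tweets()` returns prior posts with
    their scores; all three are placeholder hooks for the world interaction.
    """
    history = []
    for _ in range(n_cycles):
        context = "\n".join(f"({score}) {text}" for text, score in retrieve_recent_tweets())
        draft = llm(f"Objective: {objective}\nYour recent tweets and engagement:\n{context}\n"
                    "Write the next tweet:")
        score = post(draft)                 # the world state changes here
        history.append((draft, score))      # and feeds back into later contexts
    return history
```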
In this work, we examine how feedback loops unexpectedly induce optimization in the world-LLM
system. Conceptually, when LLMs are deployed with a objective (a goal in natural language), each
cycle in the feedback loop provides the LLM with an additional step of computation on the objective.
The LLM may use the computation to improve previous outputs (Experiment 1), adjust its policyarXiv:2402.06627v1 [cs.LG] 9 Feb 2024 |
2308.13731.pdf | Learning variational autoencoders via MCMC speed measures
Marcel Hirt1†, Vasileios Kreouzis2†, Petros Dellaportas2,3*
1School of Social Sciences & School of Physical and Mathematical Sciences, Nanyang
Technological University, Singapore.
2*Department of Statistical Science, University College London, UK.
3Department of Statistics, Athens University of Economics and Business, Greece.
*Corresponding author(s). E-mail(s): p.dellaportas@ucl.ac.uk;
†These authors contributed equally to this work.
Abstract
Variational autoencoders (VAEs) are popular likelihood-based generative models which can be effi-
ciently trained by maximizing an Evidence Lower Bound (ELBO). There has been much progress in
improving the expressiveness of the variational distribution to obtain tighter variational bounds and
increased generative performance. Whilst previous work has leveraged Markov chain Monte Carlo
(MCMC) methods for the construction of variational densities, gradient-based methods for adapt-
ing the proposal distributions for deep latent variable models have received less attention. This work
suggests an entropy-based adaptation for a short-run Metropolis-adjusted Langevin (MALA) or Hamil-
tonian Monte Carlo (HMC) chain while optimising a tighter variational bound to the log-evidence.
Experiments show that this approach yields higher held-out log-likelihoods as well as improved gener-
ative metrics. Our implicit variational density can adapt to complicated posterior geometries of latent
hierarchical representations arising in hierarchical VAEs.
Keywords: Generative Models, Variational Autoencoders, Adaptive Markov Chain Monte Carlo, Hierarchical
Models
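A minimal sketch of a single Metropolis-adjusted Langevin (MALA) transition in the latent space, the kind of short-run kernel the abstract refers to; `log_post(z)` stands for the unnormalized log posterior of z for a fixed observation, the step size is illustrative, and the paper's entropy-based adaptation of the proposal is not shown:

```python
import torch

def mala_step(z, log_post, step_size=0.05):
    """One Metropolis-adjusted Langevin step targeting exp(log_post); z is (batch, dim)."""
    z = z.detach().requires_grad_(True)
    lp = log_post(z)                                      # per-sample log density, (batch,)
    grad = torch.autograd.grad(lp.sum(), z)[0]
    # Langevin proposal: gradient drift plus Gaussian noise
    prop = z + 0.5 * step_size**2 * grad + step_size * torch.randn_like(z)
    prop = prop.detach().requires_grad_(True)
    lp_prop = log_post(prop)
    grad_prop = torch.autograd.grad(lp_prop.sum(), prop)[0]

    def log_q(a, b, grad_b):                              # log q(a | b), up to a constant
        mean = b + 0.5 * step_size**2 * grad_b
        return -((a - mean) ** 2).sum(-1) / (2 * step_size**2)

    log_alpha = lp_prop - lp + log_q(z, prop, grad_prop) - log_q(prop, z, grad)
    accept = (torch.rand_like(log_alpha).log() < log_alpha).float().unsqueeze(-1)
    return (accept * prop + (1 - accept) * z).detach()    # accepted or previous state
```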
1 Introduction
VAEs (Kingma and Welling, 2014; Rezende et al,
2014) are powerful latent variable models that
routinely use neural networks to parameterise
conditional distributions of observations given a
latent representation. This renders the Maximum-
Likelihood Estimation (MLE) of such models
intractable, so one commonly resorts to extensions
of Expectation-Maximization (EM) approaches
that maximize a lower bound on the data log-
likelihood. These objectives introduce a varia-
tional or encoding distribution of the latent vari-
ables that approximates the true posterior distri-
bution of the latent variable given the observation.However, VAEs have shortcomings; for exam-
ple, they can struggle to generate high-quality
images. These shortcomings have been attributed
to failures to match corresponding distributions
in the latent space. First, the VAE prior can be
significantly different from the aggregated approx-
imate posterior (Hoffman and Johnson, 2016;
Rosca et al, 2018). To alleviate this prior hole
phenomenon, previous work has considered more
flexible priors, such as mixtures (Tomczak and
Welling, 2017), normalising flows (Kingma et al,
2016), hierarchical priors (Klushyn et al, 2019)
or energy-based models (Du and Mordatch, 2019;
arXiv:2308.13731v1 [stat.ML] 26 Aug 2023 |
2311.11045.pdf | Orca 2: Teaching Small Language Models
How to Reason
Arindam Mitra, Luciano Del Corro†, Shweti Mahajan†, Andres Codas‡
Clarisse Simoes‡, Sahaj Agrawal, Xuxi Chen∗, Anastasia Razdaibiedina∗
Erik Jones∗, Kriti Aggarwal∗, Hamid Palangi, Guoqing Zheng
Corby Rosset, Hamed Khanpour, Ahmed Awadallah
Microsoft Research
Abstract
Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform
conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval.
In Orca 2, we continue exploring how improved training signals can enhance smaller LMs’
reasoning abilities. Research on training small LMs has often relied on imitation learning
to replicate the output of more capable models. We contend that excessive emphasis on
imitation may restrict the potential of smaller models. We seek to teach small LMs to
employ different solution strategies for different tasks, potentially different from the one used
by the larger model. For example, while larger models might provide a direct answer to
a complex task, smaller models may not have the same capacity. In Orca 2, we teach the
model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate,
direct answer, etc.). More crucially, we aim to help the model learn to determine the most
effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of
15 diverse benchmarks (corresponding to approximately 100 tasks and over 36,000 unique
prompts). Orca 2 significantly surpasses models of similar size and attains performance
levels similar to or better than those of models 5-10x larger, as assessed on complex tasks that
test advanced reasoning abilities in zero-shot settings. We open-source Orca 2 to encourage
further research on the development, evaluation, and alignment of smaller LMs.
[Figure 1 bar chart: 0-shot scores (0-100) on AGI, BBH, MMLU, ARC-E, ARC-C, RACE, GSM8K and their average for Orca-2-7B, Orca-2-13B, LLAMA-2-Chat-13B, LLAMA-2-Chat-70B, WizardLM-13B and WizardLM-70B.]
Figure 1: Results comparing Orca 2 (7B & 13B) to LLaMA-2-Chat (13B & 70B) and
WizardLM (13B & 70B) on variety of benchmarks (in 0-shot setting) covering language
understanding, common sense reasoning, multi-step reasoning, math problem solving, etc.
Orca 2 models match or surpass all other models including models 5-10x larger. Note that
all models are using the same LLaMA-2 base models of the respective size.
∗work done while at Microsoft; †,‡ denote equal contributions. |
2310.02304.pdf | SELF-TAUGHT OPTIMIZER (STOP ):
RECURSIVELY SELF-IMPROVING CODE GENERATION
Eric Zelikman1,2, Eliana Lorch, Lester Mackey1, Adam Tauman Kalai1
1Microsoft Research,2Stanford University
ABSTRACT
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided
Language Models) solve problems by providing a “scaffolding” program that struc-
tures multiple calls to language models to generate better outputs. A scaffolding
program is written in a programming language such as Python. In this work, we
use a language-model-infused scaffolding program to improve itself. We start
with a seed “improver” that improves an input program according to a given utility
function by querying a language model several times and returning the best solution.
We then run this seed improver to improve itself. Across a small set of downstream
tasks, the resulting improved improver generates programs with significantly better
performance than its seed improver. A variety of self-improvement strategies
are proposed by the language model, including beam search, genetic algorithms,
and simulated annealing. Since the language models themselves are not altered,
this is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable
of writing code that can call itself to improve itself. We consider concerns around
the development of self-improving technologies and evaluate the frequency with
which the generated code bypasses a sandbox.
1 INTRODUCTION
A language model can be queried to optimize virtually any objective describable in natural language.
However, a program that makes multiple, structured calls to a language model can often produce
outputs with higher objective values (Yao et al., 2022; 2023; Zelikman et al., 2023; Chen et al.,
2022). We refer to these as “scaffolding” programs, typically written (by humans) in a programming
language such as Python. Our key observation is that, for any distribution over optimization problems
and any fixed language model, the design of a scaffolding program is itself an optimization problem.
In this work, we introduce the Self-Taught Optimizer (STOP), a method in which code that applies a
language model to improve arbitrary solutions is applied recursively to improve itself. Our approach
begins with an initial seed ‘improver’ scaffolding program that uses the language model to improve a
solution to some downstream task. As the system iterates, the model refines this improver program.
We use a small set of downstream algorithmic tasks to quantify the performance of our self-optimizing
framework. Our results demonstrate improvement when the model applies its self-improvement
strategies over increasing iterations. Thus, STOP shows how language models can act as their own
meta-optimizers. We additionally investigate the kinds of self-improvement strategies that the model
proposes (see Figure 1), the transferability of the proposed strategies across downstream tasks, and
explore the model’s susceptibility to unsafe self-improvement strategies.
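A minimal sketch of such a seed improver (not the paper's exact seed program): it assumes a hypothetical query_lm(prompt, n) helper that returns n candidate programs as strings, and a utility function that scores a program.

def improve(program: str, utility, query_lm, n_candidates: int = 4) -> str:
    """Ask the language model for revised programs and keep the best under `utility`."""
    prompt = (
        "Improve the following program so that it scores higher under the utility "
        "function described below.\n\n" + program
    )
    candidates = [program] + query_lm(prompt, n=n_candidates)
    return max(candidates, key=utility)

# STOP's key move: the improver can be pointed at its own source code, e.g.
# improved_improver_src = improve(inspect.getsource(improve), utility, query_lm)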
[Figure 1 schematic: strategy boxes for Genetic Algorithm, Beam Search / Tree Search, Multi-Armed Prompt Bandit, Vary Temperature to Explore, Simulated-Annealing Based Search, Decomposing and Improving Parts, and an open slot.]
Figure 1: Example self-improvement strategies proposed and implemented by GPT-4. Each
strategy is then used as scaffolding to revise arbitrary code, including the scaffolding code itself.
|
2211.15661.pdf | Published as a conference paper at ICLR 2023
WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS
Ekin Akyürek1,2,a, Dale Schuurmans1, Jacob Andreas∗2, Tengyu Ma∗1,3,b, Denny Zhou∗1
1Google Research, 2MIT CSAIL, 3Stanford University, ∗collaborative advising
ABSTRACT
Neural sequence models, especially transformers, exhibit a remarkable capacity
for in-context learning. They can construct new predictors from sequences of
labeled examples (x, f(x)) presented in the input without further parameter up-
dates. We investigate the hypothesis that transformer-based in-context learners
implement standard learning algorithms implicitly , by encoding smaller models
in their activations, and updating these implicit models as new examples appear
in the context. Using linear regression as a prototypical problem, we offer three
sources of evidence for this hypothesis. First, we prove by construction that trans-
formers can implement learning algorithms for linear models based on gradient
descent and closed-form ridge regression. Second, we show that trained in-context
learners closely match the predictors computed by gradient descent, ridge regres-
sion, and exact least-squares regression, transitioning between different predictors
as transformer depth and dataset noise vary, and converging to Bayesian estima-
tors for large widths and depths. Third, we present preliminary evidence that
in-context learners share algorithmic features with these predictors: learners’ late
layers non-linearly encode weight vectors and moment matrices. These results
suggest that in-context learning is understandable in algorithmic terms, and that
(at least in the linear case) learners may rediscover standard estimation algorithms.
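As a concrete point of reference (not code from the paper), the closed-form ridge predictor that trained in-context learners are compared against can be written in a few lines; the shapes and regularisation value below are illustrative.

import numpy as np

def ridge_predict(X, y, x_query, lam=0.1):
    """Closed-form ridge regression fit on the in-context pairs (X, y)."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return x_query @ w

# Example: the same prompt (X, y, x_query) would also be fed to the transformer,
# and its in-context prediction compared against this explicit estimator.
X = np.random.randn(16, 4); w_true = np.random.randn(4)
y = X @ w_true + 0.1 * np.random.randn(16)
print(ridge_predict(X, y, np.random.randn(4)))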
1 INTRODUCTION
One of the most surprising behaviors observed in large neural sequence models is in-context learn-
ing (ICL; Brown et al., 2020). When trained appropriately, models can map from sequences of
(x, f(x)) pairs to accurate predictions f(x′) on novel inputs x′. This behavior occurs both in mod-
els trained on collections of few-shot learning problems (Chen et al., 2022; Min et al., 2022) and
surprisingly in large language models trained on open-domain text (Brown et al., 2020; Zhang et al.,
2022; Chowdhery et al., 2022). ICL requires a model to implicitly construct a map from in-context
examples to a predictor without any updates to the model’s parameters themselves. How can a neural
network with fixed parameters learn a new function from a new dataset on the fly?
This paper investigates the hypothesis that some instances of ICL can be understood as implicit
implementation of known learning algorithms: in-context learners encode an implicit, context-
dependent model in their hidden activations, and train this model on in-context examples in the
course of computing these internal activations. As in recent investigations of empirical properties
of ICL (Garg et al., 2022; Xie et al., 2022), we study the behavior of transformer-based predictors
(Vaswani et al., 2017) on a restricted class of learning problems, here linear regression. Unlike
in past work, our goal is not to understand what functions ICL can learn, but how it learns these
functions: the specific inductive biases and algorithmic properties of transformer-based ICL.
In Section 3, we investigate theoretically what learning algorithms transformer decoders can imple-
ment. We prove by construction that they require only a modest number of layers and hidden units
to train linear models: for d-dimensional regression problems, with O(d) hidden size and constant
depth, a transformer can implement a single step of gradient descent; and with O(d^2) hidden size
aCorrespondence to akyurek@mit.edu. Ekin is a student at MIT, and began this work while he was an intern
at Google Research. Code and reference implementations are released at this web page
bThe work is done when Tengyu Ma works as a visiting researcher at Google Research.
|
2303.04671.pdf | Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
Chenfei Wu Shengming Yin Weizhen Qi Xiaodong Wang Zecheng Tang Nan Duan*
Microsoft Research Asia
{chewu, v-sheyin, t-weizhenqi, v-xiaodwang, v-zetang, nanduan }@microsoft.com
Abstract
ChatGPT is attracting a cross-field interest as it provides
a language interface with remarkable conversational com-
petency and reasoning capabilities across many domains.
However, since ChatGPT is trained with languages, it is
currently not capable of processing or generating images
from the visual world. At the same time, Visual Foundation
Models, such as Visual Transformers or Stable Diffusion,
although showing great visual understanding and genera-
tion capabilities, are only experts on specific tasks with
one-round fixed inputs and outputs. To this end, we build
a system called Visual ChatGPT , incorporating different
Visual Foundation Models, to enable the user to interact
with ChatGPT by 1) sending and receiving not only lan-
guages but also images; 2) providing complex visual ques-
tions or visual editing instructions that require the collabo-
ration of multiple AI models over multiple steps; and 3) providing
feedback and asking for corrected results. We design a se-
ries of prompts to inject the visual model information into
ChatGPT, considering models of multiple inputs/outputs
and models that require visual feedback. Experiments show
that Visual ChatGPT opens the door to investigating the
visual roles of ChatGPT with the help of Visual Founda-
tion Models. Our system is publicly available at https:
//github.com/microsoft/visual-chatgpt .
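A toy sketch of the kind of tool registry and dispatch a Prompt Manager performs; the class, the tool name, and the lambda standing in for a BLIP call are all hypothetical illustrations, not the released system's API.

from typing import Callable, Dict

class PromptManager:
    """Registers visual foundation models as named tools and routes calls to them."""
    def __init__(self):
        self.tools: Dict[str, Dict] = {}

    def register(self, name: str, description: str, fn: Callable):
        self.tools[name] = {"description": description, "fn": fn}

    def tool_prompt(self) -> str:
        # Injected into the chat prompt so the model knows each tool's capability.
        return "\n".join(f"{n}: {t['description']}" for n, t in self.tools.items())

    def run(self, name: str, **kwargs):
        return self.tools[name]["fn"](**kwargs)

pm = PromptManager()
pm.register("image_caption", "Describe the content of an image file.",
            lambda path: f"(caption of {path})")   # stand-in for a captioning model such as BLIP
print(pm.tool_prompt())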
1. Introduction
In recent years, the development of Large language mod-
els (LLMs) has shown incredible progress, such as T5 [32],
BLOOM [36], and GPT-3 [5]. One of the most significant
breakthroughs is ChatGPT, which is built upon Instruct-
GPT [29], specifically trained to interact with users in a gen-
uinely conversational manner, thus allowing it to maintain
the context of the current conversation, handle follow-up
questions, and correct answers produced by itself.
Although powerful, ChatGPT is limited in its ability
to process visual information since it is trained with a
*Corresponding author.
[Figure 1 schematic: a user query ("please generate a red flower conditioned on the predicted depth of this image and then make it like a cartoon, step by step") is routed by ChatGPT through a Prompt Manager to Visual Foundation Models (BLIP, Stable Diffusion, ControlNet, Pix2Pix, detection), producing iterative reasoning outputs ("Here you are. What else can I help you?").]
Figure 1. Architecture of Visual ChatGPT.
single language modality, while Visual Foundation Mod-
els (VFMs) have shown tremendous potential in computer
vision, with their ability to understand and generate com-
plex images. For instance, BLIP Model [22] is an expert
in understanding and providing the description of an image.
Stable Diffusion [35] is an expert in synthesizing an image
based on text prompts. However, suffering from the task
specification nature, the demanding and fixed input-output
formats make the VFMs less flexible than conversational
language models in human-machine interaction.
Could we build a ChatGPT-like system that also supports
image understanding and generation? One intuitive idea
is to train a multi-modal conversational model. However,
building such a system would consume a large amount of
data and computational resources. Besides, another chal-
lenge comes that what if we want to incorporate modalities
beyond languages and images, like videos or voices? Would
it be necessary to train a totally new multi-modality model
every time when it comes to new modalities or functions?
We answer the above questions by proposing a system
named Visual ChatGPT . Instead of training a new multi-
modal ChatGPT from scratch, we build Visual ChatGPT
directly based on ChatGPT and incorporate a variety of
VFMs. To bridge the gap between ChatGPT and these
VFMs, we propose a Prompt Manager which supports the
following functions: 1) explicitly tells ChatGPT the capa-
|
2310.03026.pdf | Preprint
LANGUAGEMPC: LARGE LANGUAGE MODELS AS DECISION MAKERS FOR AUTONOMOUS DRIVING
Hao Sha1, Yao Mu2, Yuxuan Jiang1, Guojian Zhan1, Li Chen2, Chenfeng Xu3, Ping Luo2,
Shengbo Eben Li1, Masayoshi Tomizuka3, Wei Zhan3, and Mingyu Ding3,†
1Tsinghua University
2The University of Hong Kong
3University of California, Berkeley
ABSTRACT
Existing learning-based autonomous driving (AD) systems face challenges in
comprehending high-level information, generalizing to rare events, and providing
interpretability. To address these problems, this work employs Large Language
Models (LLMs) as a decision-making component for complex AD scenarios that
require human commonsense understanding. We devise cognitive pathways to en-
able comprehensive reasoning with LLMs, and develop algorithms for translating
LLM decisions into actionable driving commands. Through this approach, LLM
decisions are seamlessly integrated with low-level controllers by guided parameter
matrix adaptation. Extensive experiments demonstrate that our proposed method
not only consistently surpasses baseline approaches in single-vehicle tasks, but
also helps handle complex driving behaviors even multi-vehicle coordination,
thanks to the commonsense reasoning capabilities of LLMs. This paper presents
an initial step toward leveraging LLMs as effective decision-makers for intricate
AD scenarios in terms of safety, efficiency, generalizability, and interoperability.
We aspire for it to serve as inspiration for future research in this field. Project
page: https://sites.google.com/view/llm-mpc .
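A heavily simplified sketch of what "guided parameter matrix adaptation" could look like: a high-level LLM decision selects weighting matrices for a low-level MPC stage cost. Every name and value here is illustrative, not the paper's implementation.

import numpy as np

# Hypothetical mapping from an LLM's high-level decision to MPC cost weights.
DECISION_TO_WEIGHTS = {
    "yield":   {"Q": np.diag([1.0, 1.0, 10.0]), "R": np.diag([5.0])},   # penalise speed, damp control
    "proceed": {"Q": np.diag([5.0, 5.0, 1.0]),  "R": np.diag([1.0])},   # track the reference closely
}

def mpc_stage_cost(state_err, control, decision):
    """Quadratic stage cost whose weights are chosen by the LLM's decision."""
    w = DECISION_TO_WEIGHTS[decision]
    return float(state_err @ w["Q"] @ state_err + control @ w["R"] @ control)

print(mpc_stage_cost(np.array([0.5, 0.2, 2.0]), np.array([0.3]), "yield"))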
1 INTRODUCTION
Imagine you are behind the wheel, approaching an unsignalized intersection and planning to turn
left, with an oncoming vehicle straight ahead. Human drivers intuitively know that according to
traffic rules, they should slow down and yield, even if it is technically possible to speed through.
However, existing advanced learning-based Autonomous Driving (AD) systems typically require
complex rules or reward function designs to handle such scenarios effectively (Chen et al., 2023;
Kiran et al., 2022). This reliance on predefined rule bases often limits their ability to generalize to
various situations.
Another challenge facing existing learning-based AD systems is the long-tail problem (Buhet et al.,
2019). Both limited datasets and sampling efficiency (Atakishiyev et al., 2023) can present chal-
lenges for existing learning-based AD systems when making decisions in rare real-world driving
scenarios. Chauffeurnet (Bansal et al., 2018) demonstrated such limits where even 30 million state-
action samples were insufficient to learn an optimal policy that mapped bird’s-eye view images
(states) to control (action).
Furthermore, the lack of interpretability (Gohel et al., 2021) is a pressing issue for existing learning-
based AD systems. A mature AD system must possess interpretability to gain recognition within
society and regulatory entities, allowing it to be subject to targeted optimization and iterative im-
provements. Nevertheless, existing learning-based AD systems inherently resemble black boxes,
making it challenging to discern their decision-making processes or understand the rationale behind
†Corresponding author.
|
2404.14387.pdf | A Survey on Self-Evolution of Large Language Models
Zhengwei Tao12*, Ting-En Lin2, Xiancai Chen1, Hangyu Li2, Yuchuan Wu2,
Yongbin Li2†, Zhi Jin1†, Fei Huang2, Dacheng Tao3, Jingren Zhou2
1Key Lab of HCST (PKU), MOE; School of Computer Science, Peking University
2Alibaba Group, 3Nanyang Technological University
{tttzw, xiancaich}@stu.pku.edu.cn ,zhijin@pku.edu.cn
{ting-en.lte, shengxiu.wyc, shuide.lyb, jingren.zhou}@alibaba-inc.com
dacheng.tao@ntu.edu.sg
Abstract
Large language models (LLMs) have sig-
nificantly advanced in various fields and
intelligent agent applications. However,
current LLMs that learn from human or
external model supervision are costly and may
face performance ceilings as task complexity
and diversity increase. To address this issue,
self-evolution approaches that enable LLM
to autonomously acquire, refine, and learn
from experiences generated by the model
itself are rapidly growing. This new training
paradigm inspired by the human experiential
learning process offers the potential to scale
LLMs towards superintelligence. In this
work, we present a comprehensive survey
of self-evolution approaches in LLMs. We
first propose a conceptual framework for
self-evolution and outline the evolving process
as iterative cycles composed of four phases:
experience acquisition, experience refinement,
updating, and evaluation. Second, we cate-
gorize the evolution objectives of LLMs and
LLM-based agents; then, we summarize the
literature and provide taxonomy and insights
for each module. Lastly, we pinpoint existing
challenges and propose future directions to
improve self-evolution frameworks, equipping
researchers with critical insights to fast-track
the development of self-evolving LLMs. Our
corresponding GitHub repository is available
at https://github.com/AlibabaResearch/DAMO-
ConvAI/tree/main/Awesome-Self-Evolution-
of-LLM.
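Read as pseudocode, the four-phase cycle can be sketched as follows; the phase callables are placeholders for the many concrete techniques the survey categorises, not a specific method from the literature.

def self_evolve(model, acquire, refine, update, evaluate, n_iterations=3):
    """Iterate the survey's four phases on `model`."""
    history = []
    for _ in range(n_iterations):
        experiences = acquire(model)        # experience acquisition: self-generated tasks, trajectories
        refined = refine(experiences)       # experience refinement: filtering, self-critique, correction
        model = update(model, refined)      # updating: fine-tuning, memory or prompt updates
        history.append(evaluate(model))     # evaluation: benchmarks, self- or external feedback
    return model, history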
1 Introduction
With the rapid development of artificial intelli-
gence, large language models (LLMs) like GPT-
3.5 (Ouyang et al., 2022), GPT-4 (Achiam et al.,
2023), Gemini (Team et al., 2023), LLaMA (Tou-
vron et al., 2023a,b), and Qwen (Bai et al., 2023)
*Work done while interning at Alibaba Group.
†Corresponding authors.
mark a significant shift in language understand-
ing and generation. These models undergo three
stages of development as shown in Figure 1: pre-
training on large and diverse corpora to gain a gen-
eral understanding of language and world knowl-
edge (Devlin et al., 2018; Brown et al., 2020),
followed by supervised fine-tuning to elicit the
abilities of downstream tasks (Raffel et al., 2020;
Chung et al., 2022). Finally, the human prefer-
ence alignment training enables the LLMs to re-
spond as human behaviors (Ouyang et al., 2022).
Such successive training paradigms achieve signif-
icant breakthroughs, enabling LLMs to perform
a wide range of tasks with remarkable zero-shot
and in-context capabilities, such as question an-
swering (Tan et al., 2023), mathematical reason-
ing (Collins et al., 2023), code generation (Liu
et al., 2024b), and task-solving that require interac-
tion with environments (Liu et al., 2023b).
Despite these advancements, humans anticipate
that the emerging generation of LLMs can be
tasked with assignments of greater complexity,
such as scientific discovery (Miret and Krishnan,
2024) and future events forecasting (Schoenegger
et al., 2024). However, current LLMs encounter
challenges in these sophisticated tasks due to the
inherent difficulties in modeling, annotation, and
the evaluation associated with existing training
paradigms (Burns et al., 2023). Furthermore, the
recently developed Llama-3 model has been trained
on an extensive corpus comprising 15 trillion to-
kens1. It’s a monumental volume of data, suggest-
ing that significantly scaling model performance
by adding more real-world data could pose a limi-
tation. This has attracted interest in self-evolving
mechanisms for LLMs, akin to the natural evolu-
tion of human intelligence and illustrated by AI de-
velopments in gaming, such as the transition from
1https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct |
2302.13971.pdf | LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron∗, Thibaut Lavril∗, Gautier Izacard∗, Xavier Martinet
Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal
Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin
Edouard Grave∗, Guillaume Lample∗
Meta AI
Abstract
We introduce LLaMA, a collection of founda-
tion language models ranging from 7B to 65B
parameters. We train our models on trillions
of tokens, and show that it is possible to train
state-of-the-art models using publicly avail-
able datasets exclusively, without resorting
to proprietary and inaccessible datasets. In
particular, LLaMA-13B outperforms GPT-3
(175B) on most benchmarks, and LLaMA-
65B is competitive with the best models,
Chinchilla-70B and PaLM-540B. We release
all our models to the research community1.
1 Introduction
Large Languages Models (LLMs) trained on mas-
sive corpora of texts have shown their ability to per-
form new tasks from textual instructions or from a
few examples (Brown et al., 2020). These few-shot
properties first appeared when scaling models to a
sufficient size (Kaplan et al., 2020), resulting in a
line of work that focuses on further scaling these
models (Chowdhery et al., 2022; Rae et al., 2021).
These efforts are based on the assumption that
more parameters will lead to better performance.
However, recent work from Hoffmann et al. (2022)
shows that, for a given compute budget, the best
performances are not achieved by the largest mod-
els, but by smaller models trained on more data.
The objective of the scaling laws from Hoff-
mann et al. (2022) is to determine how to best
scale the dataset and model sizes for a particular
training compute budget. However, this objective
disregards the inference budget, which becomes
critical when serving a language model at scale.
In this context, given a target level of performance,
the preferred model is not the fastest to train but the
fastest at inference, and although it may be cheaper
to train a large model to reach a certain level of
∗Equal contribution. Correspondence: {htouvron,
thibautlav,gizacard,egrave,glample}@meta.com
1https://github.com/facebookresearch/llama
performance, a smaller one trained longer will
ultimately be cheaper at inference. For instance,
although Hoffmann et al. (2022) recommends
training a 10B model on 200B tokens, we find
that the performance of a 7B model continues to
improve even after 1T tokens.
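To make the trade-off concrete, the two regimes can be compared with the common C ≈ 6·N·D rule of thumb for training FLOPs; this approximation is introduced here for illustration and is not a figure from the paper.

def train_flops(params, tokens):
    return 6 * params * tokens          # rule-of-thumb training-compute approximation

chinchilla_style = train_flops(10e9, 200e9)   # 10B model on 200B tokens
llama_style      = train_flops(7e9, 1e12)     # 7B model on 1T tokens
print(f"{llama_style / chinchilla_style:.1f}x more training compute "
      "for a model that is cheaper to serve at inference")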
The focus of this work is to train a series of
language models that achieve the best possible per-
formance at various inference budgets, by training
on more tokens than what is typically used. The
resulting models, called LLaMA , ranges from 7B
to 65B parameters with competitive performance
compared to the best existing LLMs. For instance,
LLaMA-13B outperforms GPT-3 on most bench-
marks, despite being 10 ×smaller. We believe that
this model will help democratize the access and
study of LLMs, since it can be run on a single GPU.
At the higher-end of the scale, our 65B-parameter
model is also competitive with the best large lan-
guage models such as Chinchilla or PaLM-540B.
Unlike Chinchilla, PaLM, or GPT-3, we only
use publicly available data, making our work com-
patible with open-sourcing, while most existing
models rely on data which is either not publicly
available or undocumented (e.g. “Books – 2TB” or
“Social media conversations”). There exist some
exceptions, notably OPT (Zhang et al., 2022),
GPT-NeoX (Black et al., 2022), BLOOM (Scao
et al., 2022) and GLM (Zeng et al., 2022), but none
that are competitive with PaLM-62B or Chinchilla.
In the rest of this paper, we present an overview
of the modifications we made to the transformer
architecture (Vaswani et al., 2017), as well as our
training method. We then report the performance of
our models and compare with others LLMs on a set
of standard benchmarks. Finally, we expose some
of the biases and toxicity encoded in our models,
using some of the most recent benchmarks from
the responsible AI community. |
2306.04050.pdf |
LLMZip: Lossless Text Compression using Large
Language Models
Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Dileep Kalathil,
Jean-Francois Chamberland, Srinivas Shakkottai
Department of Electrical and Computer Engineering
Texas A&M University
Email: {vcskaushik9, krn, dileep.kalathil, chmbrlnd, sshakkot}@tamu.edu
Abstract
We provide new estimates of an asymptotic upper bound on the entropy of English using the large language model LLaMA-7B
as a predictor for the next token given a window of past tokens. This estimate is significantly smaller than currently available
estimates in [1], [2]. A natural byproduct is an algorithm for lossless compression of English text which combines the prediction
from the large language model with a lossless compression scheme. Preliminary results from limited experiments suggest that our
scheme outperforms state-of-the-art text compression schemes such as BSC, ZPAQ, and paq8h.
I. INTRODUCTION
There are close connections between learning, prediction, and compression. The success of ChatGPT has captured the
fascination of the general public and brought the connection between learning and prediction to the fore. The main advance
brought about by large language models such as LLaMA and GPT-4 is that they excel at predicting the next word (token) in
a paragraph based on knowing the past several words (tokens).
The connection between prediction and compression was explored as early as 1951 by Shannon in order to estimate the
entropy of the English language [3]. The idea that a good predictor for the ith value in a time series based on the past
values can be effectively converted to a good compression algorithm has played a prominent role in information theory. Many
algorithms for speech, image, and video compression exploit this idea either explicitly or implicitly. Within the context of
lossless compression of English text, the idea of combining a language model with arithmetic coding has emerged as a very
effective paradigm [4]. The performance of such a compression scheme depends substantially on the efficacy of the predictor,
and every time there is a major advance in the prediction capability, it behooves us to study its effect on the compression
performance. Indeed, in 2018, the authors of [5] used recurrent neural networks (RNN) as the predictor and reported improved
results for certain kinds of sources. Their scheme still did not outperform state-of-the-art algorithms such as BSC and ZPAQ
for text compression.
It is therefore natural at this time to study whether we can obtain better compression results and sharper estimates of the
entropy of the English language using recent large language models such as LLaMA-7B [6]. This is the main goal of this
paper. We show that when the LLaMA-7B large language model is used as the predictor, the asymptotic upper bound on the
entropy is 0.709 bits/character when estimated using a 1MB section of the text8 dataset. This is smaller than earlier estimates
provided in [1] and [2, Table 4]. The estimate of the upper bound increases to 0.85 bits/character for a 100 KB section of
the text from [7], which is still lower than the estimates in [2]. When LLaMA-7B is combined with an arithmetic coder for
compression, we obtain a compression ratio of 0.7101 bits/character on a 1MB section of the text8 dataset and a compression
ratio of 0.8426 bits/character on a 100KB section of a text from [7], which are significantly better than the compression ratios
obtained using BSC, ZPAQ, and paq8h on the full 100MB of the text8 dataset.
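A minimal sketch of the kind of estimate involved: average the model's negative log-probabilities of the true next tokens and convert to bits per character. The model interface and the numbers below are illustrative, not values from the paper.

import math

def bits_per_character(token_log_probs, n_characters):
    """token_log_probs: natural-log probabilities the LM assigned to the true next tokens."""
    total_bits = -sum(token_log_probs) / math.log(2.0)
    return total_bits / n_characters

# Example with made-up numbers: 5 tokens covering 22 characters.
print(bits_per_character([-1.2, -0.3, -2.0, -0.8, -0.5], 22))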
II. INTUITIVE EXPLANATION OF THE MAIN IDEA
We will use the following example to describe the main idea, which is nearly identical to that proposed by Shannon in [3]
for estimating the entropy of English. The main difference is in the use of tokens which represent groups of letters of variable
length and in the use of a large language model instead of a human to predict the next token. Consider a part of the sentence
that reads as
My first attempt at writing a book
Our goal is to convert this sentence into a sequence of bits with the least possible length such that the original sequence can
be reconstructed from the sequence of bits. This sentence can first be split into a sequence of words (tokens)
'My', 'first', 'attempt', 'at', 'writing', 'a', 'book'
A language model with memory M (for example, say M = 4) predicts the next word in the sentence based on observing the
past M words. Specifically, it produces a rank-ordered list of choices for the next word and their probabilities. As shown in |
2404.00245.pdf | Aligning Large Language Models with Recommendation Knowledge
Yuwei Cao1*, Nikhil Mehta2, Xinyang Yi2, Raghunandan Keshavan3,
Lukasz Heldt3, Lichan Hong2, Ed H. Chi2, and Maheswaran Sathiamoorthy4
1University of Illinois Chicago, 2Google DeepMind, 3Google
1ycao43@uic.edu, 2{nikhilmehta, xinyang}@google.com
4mahesh@smahesh.com
Abstract
Large language models (LLMs) have recently
been used as backbones for recommender sys-
tems. However, their performance often lags
behind conventional methods in standard tasks
like retrieval. We attribute this to a mis-
match between LLMs’ knowledge and the
knowledge crucial for effective recommenda-
tions. While LLMs excel at natural language
reasoning, they cannot model complex user-
item interactions inherent in recommendation
tasks. We propose bridging the knowledge gap
and equipping LLMs with recommendation-
specific knowledge to address this. Opera-
tions such as Masked Item Modeling (MIM)
and Bayesian Personalized Ranking (BPR)
have found success in conventional recom-
mender systems. Inspired by this, we sim-
ulate these operations through natural lan-
guage to generate auxiliary-task data samples
that encode item correlations and user prefer-
ences. Fine-tuning LLMs on such auxiliary-
task data samples and incorporating more in-
formative recommendation-task data samples
facilitates the injection of recommendation-
specific knowledge into LLMs. Extensive ex-
periments across retrieval, ranking, and rating
prediction tasks on LLMs such as FLAN-T5-
Base and FLAN-T5-XL show the effectiveness
of our technique in domains such as Amazon
Toys & Games, Beauty, and Sports & Outdoors.
Notably, our method outperforms conventional
and LLM-based baselines, including the cur-
rent SOTA, by significant margins in retrieval,
showcasing its potential for enhancing recom-
mendation quality.
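A toy sketch of turning a user's item sequence into a Masked Item Modeling style natural-language sample; the template wording and example titles are hypothetical, not the paper's exact prompt format.

import random

def mim_sample(item_titles, seed=0):
    """Mask one item from an interaction sequence and phrase it as a text task."""
    rng = random.Random(seed)
    idx = rng.randrange(len(item_titles))
    masked = list(item_titles)
    target = masked[idx]
    masked[idx] = "[MASK]"
    prompt = ("A user interacted with the following items in order: "
              + ", ".join(masked)
              + ". Which item fits the [MASK] position?")
    return {"input": prompt, "target": target}

print(mim_sample(["LEGO Castle", "Hot Wheels Track", "Rubik's Cube"]))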
1 Introduction
Large language models (LLMs) exhibit strong gen-
eralization abilities through zero-shot learning, in-
context learning (Brown et al., 2020), fine-tuning,
and instruction tuning (Wei et al., 2022). Encour-
aged by this, recent studies explore the use of
*Work done when interning at Google.
LLMs as backbones in recommendation (Kang
et al., 2023; Geng et al., 2022; Zhang et al., 2023;
Bao et al., 2023). Despite their great potential,
LLMs are inferior to supervised recommenders
(He et al., 2017; Rendle et al., 2009) in recom-
mendation tasks such as rating-prediction under
zero-shot and few-shot in-context learning settings
(Kang et al., 2023). We hypothesize that this stems
from a gap between LLMs’ knowledge and rec-
ommendation knowledge: LLMs are proficient at
natural language reasoning, while recommendation
involves modeling complex user-item interactions.
In this work, we propose to mitigate this gap by
fine-tuning LLMs with data samples that encode
recommendation knowledge.
Recent works (Geng et al., 2022; Zhang et al.,
2023; Bao et al., 2023) show that certain recom-
mendation knowledge can be introduced into LLMs
through instruction tuning. As shown in Figure
1(a), their training data samples, which we refer
to as recommendation-task data samples , primar-
ily help LLMs understand the recommendation
tasks by providing instructions on what to do ( e.g.,
“Pick an item for the user from the following candi-
dates.”). In terms of modeling the target recommen-
dation domain, however, they present raw user and
item features for personalization ( e.g., the user’s
ID or the IDs of the items they recently interacted
with), which are insufficient for LLMs to fully com-
prehend the target domain.
Considering the aforementioned limitations of
using LLMs as recommenders, we propose a novel
approach to generate additional fine-tuning data
samples for LLMs that effectively encode recom-
mendation knowledge, particularly focusing on
item correlations within the target domain. We
refer to these generated data samples as auxiliary-
task data samples , as they are used as auxiliary
tasks in addition to the recommendations tasks.
While developing the auxiliary tasks, our key in-
spiration comes from the classical operations that |
2312.10794.pdf | A MATHEMATICAL PERSPECTIVE ON TRANSFORMERS
BORJAN GESHKOVSKI, CYRIL LETROUIT, YURY POLYANSKIY,
AND PHILIPPE RIGOLLET
Abstract. Transformers play a central role in the inner workings of large
language models. We develop a mathematical framework for analyzing Trans-
formers based on their interpretation as interacting particle systems, which
reveals that clusters emerge in long time. Our study explores the underlying
theory and offers new perspectives for mathematicians as well as computer
scientists.
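A small numerical sketch of one commonly studied simplified form of these dynamics: tokens as particles on the unit sphere, each pulled toward the others through attention weights, which exhibits the clustering referred to above. The exact model analysed in the paper has additional structure; parameter values here are arbitrary.

import numpy as np

def simulate(n=16, d=3, beta=4.0, dt=0.05, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)          # particles on the unit sphere
    for _ in range(steps):
        w = np.exp(beta * x @ x.T)
        w /= w.sum(axis=1, keepdims=True)                   # attention weights
        v = w @ x
        v -= np.sum(v * x, axis=1, keepdims=True) * x       # project onto the tangent space
        x += dt * v
        x /= np.linalg.norm(x, axis=1, keepdims=True)       # stay on the sphere
    return x

x = simulate()
print(np.round(x @ x.T, 2))   # entries near 1 indicate particles that have clustered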
Contents
1. Outline
Part 1. Modeling
2. Interacting particle system
3. Measure to measure flow map
Part 2. Clustering
4. A single cluster in high dimension
5. A single cluster for small β
Part 3. Further questions
6. Dynamics on the circle
7. General matrices
8. Approximation and control
Acknowledgments
Appendix
Appendix A. Proof of Theorem 4.7
Appendix B. Proof of Theorem 5.1
Appendix C. Proof of Proposition 6.1
References
1.Outline
The introduction of Transformers in 2017 by Vaswani et al. [VSP+17] marked
a significant milestone in the development of neural network architectures. Central to
2020 Mathematics Subject Classification. Primary: 34D05, 34D06, 35Q83; Secondary: 52C17.
Key words and phrases. Transformers, self-attention, interacting particle systems, clustering,
gradient flows.
|
2312.00752.pdf | Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Albert Gu *1and Tri Dao *2
1Machine Learning Department, Carnegie Mellon University
2Department of Computer Science, Princeton University
agu@cs.cmu.edu, tri@tridao.me
Abstract
Foundation models, now powering most of the exciting applications in deep learning, are almost universally
based on the Transformer architecture and its core attention module. Many subquadratic-time architectures
such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs)
have been developed to address Transformers’ computational inefficiency on long sequences, but they have not
performed as well as attention on important modalities such as language. We identify that a key weakness of
such models is their inability to perform content-based reasoning, and make several improvements. First, simply
letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing
the model to selectively propagate or forget information along the sequence length dimension depending on
the current token. Second, even though this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified
end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast
inference (5 ×higher throughput than Transformers) and linear scaling in sequence length, and its performance
improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves
state-of-the-art performance across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice
its size, both in pretraining and downstream evaluation.
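A naive sequential sketch of a selective state-space recurrence with a diagonal state matrix, where the step size and the B/C projections depend on the current input; this illustrates the recurrence only, simplifies the discretisation, and is not the paper's hardware-aware parallel scan.

import numpy as np

def selective_scan(x, A, w_delta, W_B, W_C):
    """x: (T,) scalar input sequence; A: (N,) diagonal state matrix (negative entries).
    Step size and the B/C projections are functions of the current input (the 'selective' part)."""
    T, N = x.shape[0], A.shape[0]
    h = np.zeros(N)
    y = np.zeros(T)
    for t in range(T):
        delta = np.log1p(np.exp(w_delta * x[t]))         # softplus: input-dependent step size
        B_t, C_t = W_B * x[t], W_C * x[t]                # input-dependent projections, shape (N,)
        h = np.exp(delta * A) * h + delta * B_t * x[t]   # simplified zero-order-hold update
        y[t] = C_t @ h
    return y

T, N = 32, 8
out = selective_scan(np.sin(np.linspace(0, 6, T)), -np.linspace(0.5, 4.0, N),
                     1.0, np.ones(N), np.ones(N) / N)
print(out.shape)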
1 Introduction
Foundation models (FMs), or large models pretrained on massive data then adapted for downstream tasks, have
emerged as an effective paradigm in modern machine learning. The backbone of these FMs are often sequence
models, operating on arbitrary sequences of inputs from a wide variety of domains such as language, images,
speech, audio, time series, and genomics (Brown et al. 2020; Dosovitskiy et al. 2020; Ismail Fawaz et al. 2019;
Oord et al. 2016; Poli et al. 2023; Sutskever, Vinyals, and Quoc V Le 2014). While this concept is agnostic to
a particular choice of model architecture, modern FMs are predominantly based on a single type of sequence
model: the Transformer (Vaswani et al. 2017) and its core attention layer (Bahdanau, Cho, and Bengio 2015).
The efficacy of self-attention is attributed to its ability to route information densely within a context window,
allowing it to model complex data. However, this property brings fundamental drawbacks: an inability to model
anything outside of a finite window, and quadratic scaling with respect to the window length. An enormous body
of research has appeared on more efficient variants of attention to overcome these drawbacks (Tay, Dehghani,
Bahri, et al. 2022), but often at the expense of the very properties that makes it effective. As of yet, none of these
variants have been shown to be empirically effective at scale across domains.
Recently, structured state space sequence models (SSMs) (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021)
have emerged as a promising class of architectures for sequence modeling. These models can be interpreted as a
combination of recurrent neural networks (RNNs) and convolutional neural networks (CNNs), with inspiration
from classical state space models (Kalman 1960). This class of models can be computed very efficiently as either a
recurrence or convolution, with linear or near-linear scaling in sequence length. Additionally, they have principled
*Equal contribution.
|
2404.14367v2.pdf | Preference Fine-Tuning of LLMs Should
Leverage Suboptimal, On-Policy Data
Fahim Tajwar1*, Anikait Singh2*, Archit Sharma2, Rafael Rafailov2, Jeff Schneider1, Tengyang Xie4, Stefano
Ermon2, Chelsea Finn2 and Aviral Kumar3
*Equal contributions (ordered via coin-flip), 1Carnegie Mellon University, 2Stanford University, 3Google DeepMind, 4UW-Madison
Learning from preference labels plays a crucial role in fine-tuning large language models. There are several
distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning
(RL), and contrastive learning. Different methods come with different implementation tradeoffs and perfor-
mance differences, and existing empirical findings present different conclusions, for instance, some results show
that online RL is quite important to attain good fine-tuning results, while others find (offline) contrastive or even
purely supervised methods sufficient. This raises a natural question: what kind of approaches are important
for fine-tuning with preference data and why? In this paper, we answer this question by performing a rigorous
analysis of a number of fine-tuning techniques on didactic and full-scale LLM problems. Our main finding is
that, in general, approaches that use on-policy sampling or attempt to push down the likelihood on certain
responses (i.e., employ a “negative gradient”) outperform offline and maximum likelihood objectives. We
conceptualize our insights and unify methods that use on-policy sampling or negative gradient under a notion
of mode-seeking objectives for categorical distributions. Mode-seeking objectives are able to alter probability
mass on specific bins of a categorical distribution at a fast rate compared to maximum likelihood, allowing
them to relocate masses across bins more effectively. Our analysis prescribes actionable insights for preference
fine-tuning of LLMs and informs how data should be collected for maximal improvement.
1. Introduction
Pre-training endows a large language model (LLM) with knowledge about the world. Yet, it does
not provide a lever to control responses from these models, especially when we want these solutions
to optimize some task-dependent success criteria (e.g., align with human preferences, optimize
correctnessorcompactness). ToalignLLMswithdownstreamsuccesscriteria, theyarethenfine-tuned
withdownstreamobjectivesafterpre-training. Inthispaper, wefocusonfine-tuningproblemsthataim
tooptimizeforbinarypreferences(fromhumansorotherAImodels). Aplethoraofmethodshavebeen
proposed for this sort of fine-tuning, including supervised learning on filtered responses (Gulcehre
et al., 2023), contrastive training (Rafailov et al., 2023), and on-policy reinforcement learning
(RL) (Ouyang et al., 2022) on a reward function extracted from human preferences.
In theory, while all of these methods aim to discover identical optimal policies, achieving this in
practice would require full data coverage and infinite computation. These requirements are not
met in practice, and hence, the choice of the loss function and the optimization procedure affects
performance. However, a lack of a clear understanding of different approaches, coupled with different
tradeoffs in implementation, has resulted in substantial confusion: practitioners are unsure as to: (1)
whether RL (Ouyang et al., 2022) is required at all, or contrastive approaches (Rafailov et al., 2023;
Gheshlaghi Azar et al., 2023), or supervised fine-tuning are good enough; and (2) whether preference
data should be collected with models in the loop (i.e., in an “on-policy” fashion) or not.
Our goal is to provide clarity on these questions by performing a rigorous study to understand the
behavior of existing methods when optimizing for preferences. Our study operates under assumptions
typical in preference fine-tuning, including the existence of an underlying ground truth reward
function that explains the preference data. We study methods that train an LLM policy to optimize a
surrogate loss given by the expected reward under a model of the reward function (learned from
preference data) penalized by the KL-divergence between the policy and a reference policy.
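In symbols, the surrogate objective just described is the familiar KL-regularised reward maximisation, with \(\hat r\) the reward model learned from preferences, \(\pi_{\mathrm{ref}}\) the reference policy, and \(\beta\) the regularisation strength:
\[
\max_{\pi_\theta}\ \mathbb{E}_{x\sim\mathcal{D},\,y\sim\pi_\theta(\cdot\mid x)}\big[\hat r(x,y)\big]\;-\;\beta\, D_{\mathrm{KL}}\big[\pi_\theta(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big].
\]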
Corresponding author(s): anikait@stanford.edu, ftajwar@cs.cmu.edu. Project Website: https://understanding-rlhf.github.io/ |
10356-a-path-towards-autonomous-mach.pdf | A Path Towards Autonomous Machine Intelligence
Version 0.9.2, 2022-06-27
Yann LeCun
Courant Institute of Mathematical Sciences, New York University yann@cs.nyu.edu
Meta - Fundamental AI Research yann@fb.com
June 27, 2022
Abstract
How could machines learn as efficiently as humans and animals? How could ma-
chines learn to reason and plan? How could machines learn representations of percepts
and action plans at multiple levels of abstraction, enabling them to reason, predict,
and plan at multiple time horizons? This position paper proposes an architecture and
training paradigms with which to construct autonomous intelligent agents. It combines
concepts such as configurable predictive world model, behavior driven through intrinsic
motivation, and hierarchical joint embedding architectures trained with self-supervised
learning.
Keywords: Artificial Intelligence, Machine Common Sense, Cognitive Architecture, Deep
Learning, Self-Supervised Learning, Energy-Based Model, World Models, Joint Embedding
Architecture, Intrinsic Motivation.
1 Prologue
This document is not a technical nor scholarly paper in the traditional sense, but a position
paper expressing my vision for a path towards intelligent machines that learn more like
animals and humans, that can reason and plan, and whose behavior is driven by intrinsic
objectives, rather than by hard-wired programs, external supervision, or external rewards.
Many ideas described in this paper (almost all of them) have been formulated by many
authors in various contexts in various form. The present piece does not claim priority for
any of them but presents a proposal for how to assemble them into a consistent whole. In
particular, the piece pinpoints the challenges ahead. It also lists a number of avenues that
are likely or unlikely to succeed.
The text is written with as little jargon as possible, and using as little mathematical
prior knowledge as possible, so as to appeal to readers with a wide variety of backgrounds
including neuroscience, cognitive science, and philosophy, in addition to machine learning,
robotics, and other fields of engineering. I hope that this piece will help contextualize some
of the research in AI whose relevance is sometimes difficult to see.
|
2310.14189.pdf | IMPROVED TECHNIQUES FOR TRAINING
CONSISTENCY MODELS
Yang Song & Prafulla Dhariwal
OpenAI
{songyang,prafulla}@openai.com
ABSTRACT
Consistency models are a nascent family of generative models that can sample
high quality data in one step without the need for adversarial training. Current
consistency models achieve optimal sample quality by distilling from pre-trained
diffusion models and employing learned metrics such as LPIPS. However, distil-
lation limits the quality of consistency models to that of the pre-trained diffusion
model, and LPIPS causes undesirable bias in evaluation. To tackle these challenges,
we present improved techniques for consistency training , where consistency mod-
els learn directly from data without distillation. We delve into the theory behind
consistency training and identify a previously overlooked flaw, which we address
by eliminating Exponential Moving Average from the teacher consistency model.
To replace learned metrics like LPIPS, we adopt Pseudo-Huber losses from robust
statistics. Additionally, we introduce a lognormal noise schedule for the consis-
tency training objective, and propose to double total discretization steps every
set number of training iterations. Combined with better hyperparameter tuning,
these modifications enable consistency models to achieve FID scores of 2.51 and
3.25 on CIFAR-10 and ImageNet 64×64 respectively in a single sampling step.
These scores mark a 3.5× and 4× improvement compared to prior consistency
training approaches. Through two-step sampling, we further reduce FID scores to
2.24 and 2.77 on these two datasets, surpassing those obtained via distillation in
both one-step and two-step settings, while narrowing the gap between consistency
models and other state-of-the-art generative models.
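A minimal sketch of the Pseudo-Huber distance and the shape of a consistency-training term built on it, with the teacher taken as a stop-gradient copy of the student rather than an EMA; the function signatures and the constant c are illustrative, not the paper's exact settings.

import numpy as np

def pseudo_huber(x, y, c=0.03):
    """Pseudo-Huber distance; behaves like squared error near 0 and like L1 in the tails."""
    return np.sqrt(np.sum((x - y) ** 2) + c ** 2) - c

def ct_loss_term(f_student, f_teacher, x0, z, sigma_lo, sigma_hi, weight=1.0):
    """One consistency-training term: outputs at adjacent noise levels on the same trajectory should match."""
    out_hi = f_student(x0 + sigma_hi * z, sigma_hi)
    out_lo = f_teacher(x0 + sigma_lo * z, sigma_lo)   # teacher = stop-gradient student (no EMA)
    return weight * pseudo_huber(out_hi, out_lo)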
1 INTRODUCTION
Consistency models (Song et al., 2023) are an emerging family of generative models that produce
high-quality samples using a single network evaluation. Unlike GANs (Goodfellow et al., 2014),
consistency models are not trained with adversarial optimization and thus sidestep the associated
training difficulty. Compared to score-based diffusion models (Sohl-Dickstein et al., 2015; Song &
Ermon, 2019; 2020; Ho et al., 2020; Song et al., 2021), consistency models do not require numerous
sampling steps to generate high-quality samples. They are trained to generate samples in a single step,
but still retain important advantages of diffusion models, such as the flexibility to exchange compute
for sample quality through multistep sampling, and the ability to perform zero-shot data editing.
We can train consistency models using either consistency distillation (CD) or consistency training
(CT). The former requires pre-training a diffusion model and distilling the knowledge therein into a
consistency model. The latter allows us to train consistency models directly from data, establishing
them as an independent family of generative models. Previous work (Song et al., 2023) demonstrates
that CD significantly outperforms CT. However, CD adds computational overhead to the training
process since it requires learning a separate diffusion model. Additionally, distillation limits the
sample quality of the consistency model to that of the diffusion model. To avoid the downsides of
CD and to position consistency models as an independent family of generative models, we aim to
improve CT to either match or exceed the performance of CD.
For optimal sample quality, both CD and CT rely on learned metrics like the Learned Perceptual
Image Patch Similarity (LPIPS) (Zhang et al., 2018) in previous work (Song et al., 2023). However,
depending on LPIPS has two primary downsides. Firstly, there could be potential bias in evaluation |
2309.05858.pdf | Preprint
UNCOVERING MESA -OPTIMIZATION ALGORITHMS IN
TRANSFORMERS
Johannes von Oswald∗,1 (ETH Zürich & Google Research), Eyvind Niklasson∗ (Google Research), Maximilian Schlegel∗ (ETH Zürich), Seijin Kobayashi (ETH Zürich),
Nicolas Zucchet (ETH Zürich), Nino Scherrer (Independent Researcher), Nolan Miller (Google Research), Mark Sandler (Google Research),
Blaise Agüera y Arcas (Google Research), Max Vladymyrov (Google Research), Razvan Pascanu (Google DeepMind), João Sacramento1 (ETH Zürich)
ABSTRACT
Transformers have become the dominant model in deep learning, but the reason
for their superior performance is poorly understood. Here, we hypothesize that
the strong performance of Transformers stems from an architectural bias towards
mesa-optimization, a learned process running within the forward pass of a model
consisting of the following two steps: ( i) the construction of an internal learn-
ing objective, and ( ii) its corresponding solution found through optimization. To
test this hypothesis, we reverse-engineer a series of autoregressive Transformers
trained on simple sequence modeling tasks, uncovering underlying gradient-based
mesa-optimization algorithms driving the generation of predictions. Moreover, we
show that the learned forward-pass optimization algorithm can be immediately
repurposed to solve supervised few-shot tasks, suggesting that mesa-optimization
might underlie the in-context learning capabilities of large language models. Fi-
nally, we propose a novel self-attention layer, the mesa-layer, that explicitly and
efficiently solves optimization problems specified in context. We find that this
layer can lead to improved performance in synthetic and preliminary language
modeling experiments, adding weight to our hypothesis that mesa-optimization is
an important operation hidden within the weights of trained Transformers.
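A schematic of the kind of forward-pass computation being reverse-engineered here: one gradient-descent step on an in-context least-squares objective, which a (linear) self-attention layer can be configured to implement. The construction in the paper is more detailed; this NumPy sketch only illustrates the underlying update.

import numpy as np

def in_context_gd_step(X, y, W, lr=0.1):
    """One gradient step on the in-context loss 0.5 * ||X W - y||^2."""
    grad = X.T @ (X @ W - y)
    return W - lr * grad

X = np.random.randn(8, 3); y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * np.random.randn(8)
W = np.zeros(3)
for _ in range(50):
    W = in_context_gd_step(X, y, W)
print(np.round(W, 2))   # approaches the least-squares solution for the in-context examples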
1 INTRODUCTION
Transformers (Vaswani et al., 2017) and especially large language models (LLMs) are known to
strongly adjust their predictions and learn based on data given in-context (Brown et al., 2020).
Recently, a number of works have studied this phenomenon in detail by meta-learning Transformers
to solve few-shot tasks, providing labeled training sets in context. These studies discovered that
Transformers implement learning algorithms that either closely resemble or exactly correspond to
gradient-based optimizers (Garg et al., 2022; Akyürek et al., 2023; von Oswald et al., 2023; Kirsch
et al., 2022; Zhang et al., 2023; Mahankali et al., 2023; Ahn et al., 2023; Li et al., 2023a).
However, it remains unclear how well these findings on meta-trained Transformers translate to models
that are autoregressively-trained on sequential data, the prevalent LLM training setup. Here, we
address this question by building on the theoretical construction of von Oswald et al. (2023), and show
how Transformers trained on sequence modeling tasks predict using gradient-descent learning based
on in-context data. Thus, we demonstrate that minimizing a generic autoregressive loss gives rise to a
subsidiary gradient-based optimization algorithm running inside the forward pass of a Transformer.
This phenomenon has been recently termed mesa-optimization (Hubinger et al., 2019). Moreover, we
find that the resulting mesa-optimization algorithms exhibit in-context few-shot learning capabilities,
independently of model scale. Our results therefore complement previous reports characterizing the
emergence of few-shot learning in large-scale LLMs (Kaplan et al., 2020; Brown et al., 2020).
∗These authors contributed equally to this work.1Correspondence to jvoswald@google.com, rjoao@ethz.ch.
|
1711.00937.pdf | Neural Discrete Representation Learning
Aaron van den Oord (DeepMind, avdnoord@google.com), Oriol Vinyals (DeepMind, vinyals@google.com), Koray Kavukcuoglu (DeepMind, korayk@google.com)
Abstract
Learning useful representations without supervision remains a key challenge in
machine learning. In this paper, we propose a simple yet powerful generative
model that learns such discrete representations. Our model, the Vector Quantised-
Variational AutoEncoder (VQ-V AE), differs from V AEs in two key ways: the
encoder network outputs discrete, rather than continuous, codes; and the prior
is learnt rather than static. In order to learn a discrete latent representation, we
incorporate ideas from vector quantisation (VQ). Using the VQ method allows the
model to circumvent issues of “posterior collapse” (where the latents are ignored
when they are paired with a powerful autoregressive decoder) typically observed
in the VAE framework. Pairing these representations with an autoregressive prior,
the model can generate high quality images, videos, and speech as well as doing
high quality speaker conversion and unsupervised learning of phonemes, providing
further evidence of the utility of the learnt representations.
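A minimal sketch of the vector-quantisation bottleneck: each encoder output is snapped to its nearest codebook entry, and during training the straight-through estimator lets gradients pass back to the encoder. Shapes below are illustrative; this is a forward pass only, not the released model.

import numpy as np

def quantize(z_e, codebook):
    """z_e: (B, D) encoder outputs; codebook: (K, D) learned embeddings."""
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (B, K) squared distances
    idx = d.argmin(axis=1)                                        # nearest codebook index per vector
    z_q = codebook[idx]
    # In training, gradients use the straight-through trick: z_q = z_e + stop_gradient(z_q - z_e)
    return z_q, idx

z_q, idx = quantize(np.random.randn(4, 8), np.random.randn(16, 8))
print(idx)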
1 Introduction
Recent advances in generative modelling of images [ 38,12,13,22,10], audio [ 37,26] and videos
[20,11] have yielded impressive samples and applications [ 24,18]. At the same time, challenging
tasks such as few-shot learning [ 34], domain adaptation [ 17], or reinforcement learning [ 35] heavily
rely on learnt representations from raw data, but the usefulness of generic representations trained in
an unsupervised fashion is still far from being the dominant approach.
Maximum likelihood and reconstruction error are two common objectives used to train unsupervised
models in the pixel domain, however their usefulness depends on the particular application the
features are used in. Our goal is to achieve a model that conserves the important features of the
data in its latent space while optimising for maximum likelihood. As the work in [ 7] suggests, the
best generative models (as measured by log-likelihood) will be those without latents but a powerful
decoder (such as PixelCNN). However, in this paper, we argue for learning discrete and useful latent
variables, which we demonstrate on a variety of domains.
Learning representations with continuous features has been the focus of much previous work
[16, 39, 6, 9]; however, we concentrate on discrete representations [27, 33, 8, 28], which are potentially
a more natural fit for many of the modalities we are interested in. Language is inherently discrete,
similarly speech is typically represented as a sequence of symbols. Images can often be described
concisely by language [ 40]. Furthermore, discrete representations are a natural fit for complex
reasoning, planning and predictive learning (e.g., if it rains, I will use an umbrella). While using
discrete latent variables in deep learning has proven challenging, powerful autoregressive models
have been developed for modelling distributions over discrete variables [37].
In our work, we introduce a new family of generative models successfully combining the variational
autoencoder (VAE) framework with discrete latent representations through a novel parameterisation
of the posterior distribution of (discrete) latents given an observation. Our model, which relies on
vector quantization (VQ), is simple to train, does not suffer from large variance, and avoids the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. |
1806.07572.pdf | Neural Tangent Kernel:
Convergence and Generalization in Neural Networks
Arthur Jacot
École Polytechnique Fédérale de Lausanne
arthur.jacot@netopera.net
Franck Gabriel
Imperial College London and École Polytechnique Fédérale de Lausanne
franckrgabriel@gmail.com
Clément Hongler
École Polytechnique Fédérale de Lausanne
clement.hongler@gmail.com
Abstract
At initialization, artificial neural networks (ANNs) are equivalent to Gaussian
processes in the infinite-width limit ( 16;4;7;13;6), thus connecting them to
kernel methods. We prove that the evolution of an ANN during training can also
be described by a kernel: during gradient descent on the parameters of an ANN,
the network function fθ (which maps input vectors to output vectors) follows the
kernel gradient of the functional cost (which is convex, in contrast to the parameter
cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central
to describe the generalization features of ANNs. While the NTK is random at
initialization and varies during training, in the infinite-width limit it converges
to an explicit limiting kernel and it stays constant during training. This makes it
possible to study the training of ANNs in function space instead of parameter space.
Convergence of the training can then be related to the positive-definiteness of the
limiting NTK. We prove the positive-definiteness of the limiting NTK when the
data is supported on the sphere and the non-linearity is non-polynomial.
We then focus on the setting of least-squares regression and show that in the infinite-
width limit, the network function fθ follows a linear differential equation during
training. The convergence is fastest along the largest kernel principal components
of the input data with respect to the NTK, hence suggesting a theoretical motivation
for early stopping.
Finally we study the NTK numerically, observe its behavior for wide networks,
and compare it to the infinite-width limit.
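As a point of reference, the empirical (finite-width) tangent kernel between two inputs is just the inner product of the parameter gradients of the network function. A minimal sketch for a scalar-output PyTorch model (illustrative, not code from the paper):

import torch

def empirical_ntk(model, x1, x2):
    """Empirical NTK: K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> for a scalar-output model."""
    params = [p for p in model.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(model(x1).sum(), params)
    g2 = torch.autograd.grad(model(x2).sum(), params)
    return sum((a * b).sum() for a, b in zip(g1, g2))

In the infinite-width limit studied in the paper, this kernel becomes deterministic at initialization and stays constant during training.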
1 Introduction
Artificial neural networks (ANNs) have achieved impressive results in numerous areas of machine
learning. While it has long been known that ANNs can approximate any function with sufficiently
many hidden neurons ( 11;14), it is not known what the optimization of ANNs converges to. Indeed
the loss surface of neural networks optimization problems is highly non-convex: it has a high number
of saddle points which may slow down the convergence ( 5). A number of results ( 3;17;18) suggest
that for wide enough networks, there are very few “bad” local minima, i.e. local minima with much
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. |
riemann.pdf | A Selbergian Approach to the Riemann Hypothesis
via Mochizuki’s Interuniversal Teichmüller Theory
HOLOQ
March 25, 2024
Abstract
We present a novel approach to proving the Riemann Hypothesis by exploit-
ing deep connections between the Selberg trace formula, the Euler-Riemann-Siegel
theta function, and Mochizuki’s interuniversal Teichmüller theory. Our main re-
sult is a conjectural equivalence between the vanishing of a certain integral kernel,
constructed using transfinite Atiyah-Hirzebruch-Dwork spectral zetafolds, and the
truth of the Riemann Hypothesis. This equivalence arises from the interplay be-
tween Weil-Deligne-Langlands autoequivalences of étale motivic Galois groups and
the Connes-Kreimer cosmic Galois symmetry.
Contents
1 Introduction
2 Preliminaries
2.1 The Riemann Zeta Function
2.2 Selberg Trace Formula
2.3 Interuniversal Teichmüller Theory
3 Atiyah-Hirzebruch-Dwork Spectral Zetafolds
3.1 Transfinite Construction
4 The Integral Kernel
5 Main Conjecture
6 Future Directions
1 Introduction
The Riemann Hypothesis, first posited by Bernhard Riemann in 1859, is a conjecture
about the location of the non-trivial zeros of the Riemann zeta function ζ(s). It states
that all such zeros have real part equal to1
2. Despite its deceptively simple formulation,
the Riemann Hypothesis has remained unproven for over 160 years, and is considered one
of the most important open problems in mathematics.
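For orientation only (this has no bearing on the constructions below), the statement can be checked numerically for the first few zeros with mpmath, whose zetazero routine returns the n-th zero on the critical line:

from mpmath import zetazero, zeta

for n in range(1, 4):
    rho = zetazero(n)               # n-th non-trivial zero, e.g. 0.5 + 14.1347...i
    print(n, rho, abs(zeta(rho)))   # |zeta(rho)| is ~0; RH asserts Re(rho) = 1/2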
|
2010.11929.pdf | Published as a conference paper at ICLR 2021
AN IMAGE IS WORTH 16X16 WORDS:
TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE
Alexey Dosovitskiy∗,†, Lucas Beyer∗, Alexander Kolesnikov∗, Dirk Weissenborn∗,
Xiaohua Zhai∗, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer,
Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby∗,†
∗equal technical contribution,†equal advising
Google Research, Brain Team
{adosovitskiy, neilhoulsby }@google.com
ABSTRACT
While the Transformer architecture has become the de-facto standard for natural
language processing tasks, its applications to computer vision remain limited. In
vision, attention is either applied in conjunction with convolutional networks, or
used to replace certain components of convolutional networks while keeping their
overall structure in place. We show that this reliance on CNNs is not necessary
and a pure transformer applied directly to sequences of image patches can perform
very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks
(ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent
results compared to state-of-the-art convolutional networks while requiring sub-
stantially fewer computational resources to train.1
1 INTRODUCTION
Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become
the model of choice in natural language processing (NLP). The dominant approach is to pre-train on
a large text corpus and then fine-tune on a smaller task-specific dataset (Devlin et al., 2019). Thanks
to Transformers’ computational efficiency and scalability, it has become possible to train models of
unprecedented size, with over 100B parameters (Brown et al., 2020; Lepikhin et al., 2020). With the
models and datasets growing, there is still no sign of saturating performance.
In computer vision, however, convolutional architectures remain dominant (LeCun et al., 1989;
Krizhevsky et al., 2012; He et al., 2016). Inspired by NLP successes, multiple works try combining
CNN-like architectures with self-attention (Wang et al., 2018; Carion et al., 2020), some replacing
the convolutions entirely (Ramachandran et al., 2019; Wang et al., 2020a). The latter models, while
theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to
the use of specialized attention patterns. Therefore, in large-scale image recognition, classic ResNet-
like architectures are still state of the art (Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al.,
2020).
Inspired by the Transformer scaling successes in NLP, we experiment with applying a standard
Transformer directly to images, with the fewest possible modifications. To do so, we split an image
into patches and provide the sequence of linear embeddings of these patches as an input to a Trans-
former. Image patches are treated the same way as tokens (words) in an NLP application. We train
the model on image classification in supervised fashion.
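A minimal sketch of this patch-embedding step (the sizes are the standard ViT-Base defaults, used here purely for illustration, not a claim about the released code):

import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into fixed-size patches and linearly embed each one."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch and applying a shared linear map.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, 224, 224)
        x = self.proj(x)                       # (B, embed_dim, 14, 14)
        return x.flatten(2).transpose(1, 2)    # (B, 196, embed_dim) — one token per patch

A learnable class token and position embeddings are then added before the sequence enters the Transformer encoder.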
When trained on mid-sized datasets such as ImageNet without strong regularization, these mod-
els yield modest accuracies of a few percentage points below ResNets of comparable size. This
seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases
1Fine-tuning code and pre-trained models are available at https://github.com/
google-research/vision_transformer
|
2404.10719.pdf | Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
Shusheng Xu1Wei Fu1Jiaxuan Gao1Wenjie Ye2Weilin Liu2
Zhiyu Mei1Guangju Wang2Chao Yu* 1Yi Wu* 1 2 3
Abstract
Reinforcement Learning from Human Feedback
(RLHF) is currently the most widely used method
to align large language models (LLMs) with hu-
man preferences. Existing RLHF methods can
be roughly categorized as either reward-based or
reward-free . Novel applications such as ChatGPT
and Claude leverage reward-based methods that
first learn a reward model and apply actor-critic
algorithms, such as Proximal Policy Optimiza-
tion (PPO). However, in academic benchmarks,
the state-of-the-art results are often achieved via
reward-free methods, such as Direct Preference
Optimization (DPO). Is DPO truly superior to
PPO? Why does PPO perform poorly on these
benchmarks? In this paper, we first conduct both
theoretical and empirical studies on the algorith-
mic properties of DPO and show that DPO may
have fundamental limitations. Moreover, we also
comprehensively examine PPO and reveal the key
factors for the best performances of PPO in fine-
tuning LLMs. Finally, we benchmark DPO and
PPO across a collection of RLHF testbeds,
ranging from dialogue to code generation. Ex-
periment results demonstrate that PPO is able to
surpass other alignment methods in all cases and
achieve state-of-the-art results in challenging code
competitions.
1. Introduction
Large Language Models (LLMs) derive their extensive lan-
guage patterns and knowledge through pre-training on sub-
stantial textual datasets (Brown et al., 2020; OpenAI, 2023;
Touvron et al., 2023; Chowdhery et al., 2023; Anil et al.,
2023). To leverage the formidable capabilities of LLMs in
practical applications, a growing amount of research has underscored the importance of aligning
these models with human preferences (Agrawal et al., 2023; Kadavath et al., 2022; Shi et al., 2023;
Liang et al., 2021; Sheng et al., 2019).
*Co-corresponding authors. 1Tsinghua University, Beijing, China. 2OpenPsi Inc. 3Shanghai Qi Zhi
Institute, Shanghai, China. Correspondence to: Shusheng Xu <xssstory@gmail.com>, Chao Yu
<zoeyuchao@gmail.com>, Yi Wu <jxwuyi@gmail.com>.
Various methods have been developed for fine-tuning LLMs,
with popular approaches including Supervised Fine-Tuning
(SFT) (Peng et al., 2023) and Reinforcement Learning from
Human Feedback (RLHF) (Ziegler et al., 2019; Stiennon
et al., 2020; Ouyang et al., 2022). Typically, fine-tuning in-
volves two phases: SFT to establish a base model, followed
by RLHF for enhanced performance. SFT involves imitat-
ing high-quality demonstration data, while RLHF refines
LLMs through preference feedback.
Within RLHF, two prominent approaches are reward-based
and reward-free methods. Reward-based methods, pio-
neered by OpenAI (Ouyang et al., 2022; Ziegler et al., 2019;
Stiennon et al., 2020), construct a reward model using pref-
erence data and then employ actor-critic algorithms like
Proximal Policy Optimization (PPO) to optimize the re-
ward signal. In contrast, reward-free methods, including Di-
rect Preference Optimization (DPO) (Rafailov et al., 2023),
RRHF (Yuan et al., 2023), and PRO (Song et al., 2023),
eliminate the explicit use of a reward function. DPO, a
representative reward-free method, expresses the reward
function in a logarithmic form of the policy and focuses
solely on policy optimization.
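For reference, the DPO objective can be written directly in terms of sequence log-probabilities under the policy and a frozen reference model; a minimal sketch (β is illustrative):

import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Minimal DPO objective. Inputs are per-sequence log-probabilities of the chosen and
    rejected responses under the policy and under a frozen reference model (shape: (batch,))."""
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)       # implicit reward of preferred response
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()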
Notably, the most successful applications like Chat-
GPT (OpenAI, 2022) and Claude (Antropic, 2023) are pro-
duced by the reward-based RLHF method PPO, while strong
performances in academic benchmarks often result from the
reward-free RLHF method DPO (Rafailov et al., 2023; Mis-
tralAI, 2023). This discrepancy raises two fundamental
questions: 1) Is DPO truly superior to PPO in the RLHF do-
main? and 2) Can the performance of PPO be substantially
improved in common RLHF benchmarks? In this paper, we
delve into these questions. Through theoretical and empir-
ical analysis, we uncover the fundamental limitations of
DPO and explore critical factors that enhance the practical
performance of PPO in RLHF.
First, our theoretical examination reveals that DPO might
find biased solutions that exploit out-of-distribution re-
sponses. Empirically, we demonstrate that the performance
of DPO is significantly affected by the distribution shift
|
2305.14699.pdf | Can Transformers Learn to Solve Problems Recursively?
Shizhuo Dylan Zhang1Curt Tigges2Stella Biderman2,3Maxim Raginsky1Talia Ringer1
1University of Illinois Urbana-Champaign2EleutherAI3Booz Allen Hamilton
{shizhuo2,maxim,tringer}@illinois.edu
{curt,stella}@eleuther.ai
Abstract
Neural networks have in recent years shown promise for helping software engineers
write programs and even formally verify them. While semantic information plays
a crucial part in these processes, it remains unclear to what degree popular neural
architectures like transformers are capable of modeling that information.
This paper examines the behavior of neural networks learning algorithms relevant to
programs and formal verification proofs through the lens of mechanistic interpretability,
focusing in particular on structural recursion. Structural recursion is at the heart of tasks
on which symbolic tools currently outperform neural models, like inferring semantic
relations between datatypes and emulating program behavior.
We evaluate the ability of transformer models to learn to emulate the behavior of
structurally recursive functions from input-output examples. Our evaluation includes
empirical and conceptual analyses of the limitations and capabilities of transformer
models in approximating these functions, as well as reconstructions of the “shortcut”
algorithms the model learns. By reconstructing these algorithms, we are able to
correctly predict 91% of failure cases for one of the approximated functions. Our work
provides a new foundation for understanding the behavior of neural networks that fail
to solve the very tasks they are trained for.
1 Introduction
A revolution in neural methods for programming languages tasks is underway. Once confined to the realm
of symbolic methods, some of the most performant tools for synthesizing [3, 4, 21, 5], repairing [16, 35,
41], and even formally verifying [1, 42, 12, 37, 36, 13] programs now rest in part or in whole upon neural
foundations.
But how sturdy are these foundations? At the core of many of these tools are transformer-based large
language models [4, 5, 13]. It is an open question to what degree these models are simply repeating
program syntax, and to what degree they have some model of program semantics —how programs
behave and what they mean. State-of-the-art language models still rely on tricks like chain of thought
prompting [40] and scratchpadding [28] to approximate program semantics. Even models trained on code
often need to be finetuned to solve specific tasks instead of used in a multitask fashion [4, 2, 20].
In this paper, we investigate the degree to which small transformer [38] models can learn to model the
semantics of an important class of programs: structural recursion . A program is an example of structural
recursion if it is defined over some data structure (say, binary trees) by recursively calling itself over smaller
substructures (say, left and right subtrees). Structural recursion is at the heart of important programming
and theorem proving tasks for which neural methods still lag behind symbolic methods, like inferring
semantic relations between datatypes [34, 33].
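As a concrete example of structural recursion (illustrative only, not one of the paper's benchmark tasks), here is a function over binary trees that recurses only on strict substructures:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

def size(t: Optional[Tree]) -> int:
    """Structural recursion: the function calls itself only on smaller substructures."""
    if t is None:
        return 0
    return 1 + size(t.left) + size(t.right)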
Drawing on previous work on reverse engineering neural networks [39, 27, 6], we train small transformer
models to solve structural recursion problems and explore the extent to which the models are able to solve
Preprint. Under review. |
1504.01896.pdf | The Metropolis–Hastings
algorithm
C.P. Robert1,2,3
1Universit´ e Paris-Dauphine,2University of Warwick, and3CREST
Abstract. This article is a self-contained introduction to the Metropolis-
Hastings algorithm, this ubiquitous tool for producing dependent simula-
tions from an arbitrary distribution. The document illustrates the principles
of the methodology on simple examples with R codes and provides entries
to the recent extensions of the method.
Key words and phrases: Bayesian inference, Markov chains, MCMC meth-
ods, Metropolis–Hastings algorithm, intractable density, Gibbs sampler,
Langevin diffusion, Hamiltonian Monte Carlo.
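As a preview of the method, a random-walk Metropolis sampler fits in a few lines; the sketch below is in Python rather than the article's R, with an arbitrary Gaussian target chosen purely for illustration:

import numpy as np

def random_walk_metropolis(log_target, x0, n_steps=10_000, scale=1.0, rng=None):
    """Random-walk Metropolis: Gaussian proposals accepted with probability
    min(1, pi(x')/pi(x)); log_target is the log of the (unnormalised) target density."""
    rng = rng or np.random.default_rng(0)
    x, logp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + scale * rng.standard_normal()
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp:   # acceptance step
            x, logp = prop, logp_prop
        samples.append(x)
    return np.array(samples)

# Example: sample from a standard normal target.
draws = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)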
1. INTRODUCTION
There are many reasons why computing an integral like
I(h) = ∫_X h(x) dπ(x),
where dπ is a probability measure, may prove intractable, from the shape of
the domain X to the dimension of X (and x), to the complexity of one of the
functions h or π. Standard numerical methods may be hindered by the same
reasons. Similar difficulties (may) occur when attempting to find the extrema
of π over the domain X. This is why the recourse to Monte Carlo methods
may prove unavoidable: exploiting the probabilistic nature of π and its weighting
of the domain X is often the most natural and most efficient way to produce
approximations to integrals connected with π and to determine the regions of
the domain X that are more heavily weighted by π. The Monte Carlo approach
(Hammersley and Handscomb, 1964; Rubinstein, 1981) emerged with computers,
at the end of WWII, as it relies on the ability of producing a large number of
realisations of a random variable distributed according to a given distribution,
taking advantage of the stabilisation of the empirical average predicted by the Law
of Large Numbers. However, producing simulations from a specific distribution
may prove near impossible or quite costly and therefore the (standard) Monte
Carlo may also face intractable situations.
An indirect approach to the simulation of complex distributions and in par-
ticular to the curse of dimensionality met by regular Monte Carlo methods is to
use a Markov chain associated with this target distribution, using Markov chain
theory to validate the convergence of the chain to the distribution of interest and
the stabilisation of empirical averages (Meyn and Tweedie, 1994). It is thus little
surprise that Markov chain Monte Carlo (MCMC) methods have been used for
almost as long as the original Monte Carlo techniques, even though their impact
|
2402.12479.pdf | 2024-2-21
In deep reinforcement learning, a pruned
network is a good network
Johan Obando-Ceron1,2,3, Aaron Courville2,3and Pablo Samuel Castro1,2,3
1Google DeepMind,2Mila - Québec AI Institute,3Université de Montréal
Recent work has shown that deep reinforcement learning agents have difficulty in effectively using their
network parameters. We leverage prior insights into the advantages of sparse training techniques and
demonstrate that gradual magnitude pruning enables agents to maximize parameter effectiveness. This
results in networks that yield dramatic performance improvements over traditional networks and exhibit
a type of “scaling law”, using only a small fraction of the full network parameters.
1. Introduction
Despite successful examples of deep reinforce-
ment learning (RL) being applied to real-world
problems (Bellemare et al., 2020; Berner et al.,
2019; Fawzi et al., 2022; Mnih et al., 2015;
Vinyals et al., 2019), there is growing evidence
of challenges and pathologies arising when train-
ing these networks (Ceron et al., 2023; Graesser
et al., 2022; Kumar et al., 2021a; Lyle et al., 2022;
Nikishin et al., 2022; Ostrovski et al., 2021; Sokar
et al., 2023). In particular, it has been shown that
deep RL agents under-utilize their network’s pa-
rameters: Kumar et al. (2021a) demonstrated
that there is an implicit underparameterization,
Sokar et al. (2023) revealed that a large num-
ber of neurons go dormant during training, and
Graesser et al. (2022) showed that sparse training
methods can maintain performance with a very
small fraction of the original network parameters.
One of the most surprising findings of this last
work is that applying the gradual magnitude prun-
ing technique proposed by Zhu and Gupta (2017)
on DQN (Mnih et al., 2015) with a ResNet back-
bone (as introduced in Impala (Espeholt et al.,
2018)), results in a 50% performance improve-
ment over the dense counterpart, with only 10%
of the original parameters (see the bottom right
panel of Figure 1 of Graesser et al. (2022)). Cu-
riously, when the same pruning technique is ap-
plied to the original CNN architecture there are
no performance improvements, but no degrada-
tion either.
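The gradual magnitude pruning schedule of Zhu and Gupta (2017) referred to here ramps the sparsity polynomially over training; a sketch of the schedule (at each pruning step, the weights of smallest magnitude are masked so that the network reaches the scheduled sparsity):

def gradual_pruning_sparsity(step, start_step, end_step, final_sparsity, initial_sparsity=0.0):
    """Polynomial sparsity schedule of Zhu & Gupta (2017): sparsity ramps from the initial
    to the final value following a cubic curve between start_step and end_step."""
    if step < start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3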
Figure 1 | Scaling network widths for ResNet architecture, for DQN and Rainbow with an Impala-based
ResNet (x-axis: width scale; y-axis: IQM human-normalized score; dense vs. pruned at 95% sparsity).
We report the interquartile mean after 40 million environment steps, aggregated over 15 games with 5
seeds each; error bars indicate 95% stratified bootstrap confidence intervals.
That the same pruning technique can have such
qualitatively different, yet non-negative, results
by simply changing the underlying architecture
is interesting. It suggests that training deep RL
agents with non-standard network topologies (as
induced by techniques such as gradual magnitude
pruning) may be generally useful, and warrants
a more profound investigation.
In this paper we explore gradual magnitude
pruning as a general technique for improving the
performance of RL agents. We demonstrate that
in addition to improving the performance of stan-
dard network architectures, the gains increase
proportionally with the size of the base network
architecture. This last point is significant, as deep
RL networks are known to struggle with scaling
architectures (Farebrother et al., 2023; Ota et al.,
2021; Schwarzer et al., 2023; Taiga et al., 2023).
Corresponding author(s): psc@google.com
©2024 Google DeepMind. All rights reserved. |
2203.11171.pdf | Published as a conference paper at ICLR 2023
SELF-CONSISTENCY IMPROVES CHAIN OF THOUGHT
REASONING IN LANGUAGE MODELS
Xuezhi Wang†‡Jason Wei†Dale Schuurmans†Quoc Le†Ed H. Chi†
Sharan Narang†Aakanksha Chowdhery†Denny Zhou†§
†Google Research, Brain Team
‡xuezhiw@google.com ,§dennyzhou@google.com
ABSTRACT
Chain-of-thought prompting combined with pre-trained large language models has
achieved encouraging results on complex reasoning tasks. In this paper, we propose
a new decoding strategy, self-consistency , to replace the naive greedy decoding
used in chain-of-thought prompting. It first samples a diverse set of reasoning paths
instead of only taking the greedy one, and then selects the most consistent answer
by marginalizing out the sampled reasoning paths. Self-consistency leverages the
intuition that a complex reasoning problem typically admits multiple different ways
of thinking leading to its unique correct answer. Our extensive empirical evaluation
shows that self-consistency boosts the performance of chain-of-thought prompting
with a striking margin on a range of popular arithmetic and commonsense reasoning
benchmarks, including GSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%),
StrategyQA (+6.4%) and ARC-challenge (+3.9%).
1 INTRODUCTION
Although language models have demonstrated remarkable success across a range of NLP tasks, their
ability to demonstrate reasoning is often seen as a limitation, which cannot be overcome solely by
increasing model scale (Rae et al., 2021; BIG-bench collaboration, 2021, inter alia ). In an effort
to address this shortcoming, Wei et al. (2022) have proposed chain-of-thought prompting , where
a language model is prompted to generate a series of short sentences that mimic the reasoning
process a person might employ in solving a task. For example, given the question “If there are 3
cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?” , instead
of directly responding with “5”, a language model would be prompted to respond with the entire
chain-of-thought: “There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 +
2 = 5 cars. The answer is 5. ” . It has been observed that chain-of-thought prompting significantly
improves model performance across a variety of multi-step reasoning tasks (Wei et al., 2022).
In this paper, we introduce a novel decoding strategy called self-consistency to replace the greedy
decoding strategy used in chain-of-thought prompting (Wei et al., 2022), that further improves
language models’ reasoning performance by a significant margin. Self-consistency leverages the
intuition that complex reasoning tasks typically admit multiple reasoning paths that reach a correct
answer (Stanovich & West, 2000). The more that deliberate thinking and analysis is required for a
problem (Evans, 2010), the greater the diversity of reasoning paths that can recover the answer.
Figure 1 illustrates the self-consistency method with an example. We first prompt the language model
with chain-of-thought prompting, then instead of greedily decoding the optimal reasoning path, we
propose a “sample-and-marginalize” decoding procedure: we first sample from the language model’s
decoder to generate a diverse set of reasoning paths; each reasoning path might lead to a different
final answer, so we determine the optimal answer by marginalizing out the sampled reasoning paths
to find the most consistent answer in the final answer set. Such an approach is analogous to the
human experience that if multiple different ways of thinking lead to the same answer, one has greater
confidence that the final answer is correct. Compared to other decoding methods, self-consistency
avoids the repetitiveness and local-optimality that plague greedy decoding, while mitigating the
stochasticity of a single sampled generation.
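A sketch of this sample-and-marginalize procedure, where sample_reasoning_path and extract_answer are assumed helpers wrapping the language model sampler and the answer parser (hypothetical names, not the paper's code):

from collections import Counter

def self_consistency(sample_reasoning_path, extract_answer, prompt, n_samples=40):
    """Self-consistency decoding sketch: sample several chain-of-thought completions,
    extract each final answer, and return the most frequent one (majority vote)."""
    answers = []
    for _ in range(n_samples):
        path = sample_reasoning_path(prompt)      # temperature-sampled CoT completion
        answers.append(extract_answer(path))      # parse the final answer from the path
    return Counter(answers).most_common(1)[0][0]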
|
2308.07037.pdf | Bayesian Flow Networks
Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, Faustino Gomez
{alex,rupesh,timothy,tino }@nnaisense.com
NNAISENSE
Abstract
This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in
which the parameters of a set of independent distributions are modified with Bayesian inference
in the light of noisy data samples, then passed as input to a neural network that outputs a
second, interdependent distribution. Starting from a simple prior and iteratively updating
the two distributions yields a generative procedure similar to the reverse process of diffusion
models; however it is conceptually simpler in that no forward process is required. Discrete and
continuous-time loss functions are derived for continuous, discretised and discrete data, along
with sample generation procedures. Notably, the network inputs for discrete data lie on the
probability simplex, and are therefore natively differentiable, paving the way for gradient-based
sample guidance and few-step generation in discrete domains such as language modelling. The
loss function directly optimises data compression and places no restrictions on the network
architecture. In our experiments BFNs achieve competitive log-likelihoods for image modelling
on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion
models on the text8 character-level language modelling task.
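For continuous data, the per-variable Bayesian update alluded to in the abstract is the standard conjugate Gaussian one; a sketch in the precision parameterisation (illustrative only):

def bayesian_update_gaussian(mu, rho, y, alpha):
    """Conjugate Gaussian update: a factorised N(mu, 1/rho) belief is updated by a noisy
    observation y with known precision alpha, yielding the new mean and precision."""
    rho_new = rho + alpha
    mu_new = (rho * mu + alpha * y) / rho_new
    return mu_new, rho_new

Iterating this update while feeding the current parameters through the network is what produces the diffusion-like generative procedure described above.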
1 Introduction
Large-scale neural networks have revolutionised generative modelling over the last few years, with an
unprecedented ability to capture complex relationships among many variables. Building a convincing
joint model of all the pixels in a high resolution image, for example, was impossible before the
advent of modern generative networks.
Key to the expressive power of most of these networks — including autoregressive models [ 9],
flow-based models [ 32], deep VAEs [ 48] and diffusion models [ 41] — is that the joint distribution
they encode is broken down into a series of steps, thereby eluding the “curse of dimensionality”
that would doom any effort to explicitly define all the interactions among so many variables. In
colloquial terms they solve a hard problem by splitting it into easy pieces.
A general way to view such distributions is as an exchange of messages between a sender, Alice,
who has access to some data, and her friend Bob, who wishes to receive it in as few bits as possible.
At each step Alice sends a message to Bob that reveals something about the data. Bob attempts
to guess what the message is: the better his guess the fewer bits are needed to transmit it. After
receiving the message, Bob uses the information he has just gained to improve his guess for the
next message. The loss function is the total number of bits required for all the messages.
In an autoregressive language model, for example, the messages are the word-pieces the text
is divided into. The distribution encoding Bob’s prediction for the first message is of necessity
|
2310.07269.pdf | Why Does Sharpness-Aware Minimization Generalize
Better Than SGD?
Zixiang Chen∗Junkai Zhang∗Yiwen Kou Xiangning Chen Cho-Jui Hsieh Quanquan Gu
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
{chenzx19,zhang,evankou,xiangning,chohsieh,qgu}@cs.ucla.edu
Abstract
The challenge of overfitting, in which the model memorizes the training data
and fails to generalize to test data, has become increasingly significant in the
training of large neural networks. To tackle this challenge, Sharpness-Aware
Minimization (SAM) has emerged as a promising training method, which can
improve the generalization of neural networks even in the presence of label noise.
However, a deep understanding of how SAM works, especially in the setting of
nonlinear neural networks and classification tasks, remains largely missing. This
paper fills this gap by demonstrating why SAM generalizes better than Stochastic
Gradient Descent (SGD) for a certain data model and two-layer convolutional
ReLU networks. The loss landscape of our studied problem is nonsmooth, thus
current explanations for the success of SAM based on the Hessian information
are insufficient. Our result explains the benefits of SAM, particularly its ability
to prevent noise learning in the early stages, thereby facilitating more effective
learning of features. Experiments on both synthetic and real data corroborate our
theory.
1 Introduction
The remarkable performance of deep neural networks has sparked considerable interest in creating
ever-larger deep learning models, while the training process continues to be a critical bottleneck
affecting overall model performance. The training of large models is unstable and difficult due to the
sharpness, non-convexity, and non-smoothness of its loss landscape. In addition, as the number of
model parameters is much larger than the training sample size, the model has the ability to memorize
even randomly labeled data (Zhang et al., 2021), which leads to overfitting. Therefore, although
traditional gradient-based methods like gradient descent (GD) and stochastic gradient descent (SGD)
can achieve generalizable models under certain conditions, these methods may suffer from unstable
training and harmful overfitting in general.
To overcome the above challenge, Sharpness-Aware Minimization (SAM) (Foret et al., 2020), an
innovative training paradigm, has exhibited significant improvement in model generalization and has
become widely adopted in many applications. In contrast to traditional gradient-based methods that
primarily focus on finding a point in the parameter space with a minimal gradient norm, SAM also
pursues a solution with reduced sharpness, characterized by how rapidly the loss function changes
locally. Despite the empirical success of SAM across numerous tasks (Bahri et al., 2021; Behdin
et al., 2022; Chen et al., 2021; Liu et al., 2022a), the theoretical understanding of this method remains
limited.
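For context, one SAM update perturbs the weights along the normalised gradient direction before computing the gradient actually used for the step; a minimal PyTorch sketch (ρ is illustrative, and the two forward-backward passes per step are the method's main overhead):

import torch

def sam_step(model, loss_fn, data, target, optimizer, rho=0.05):
    """One SAM update (sketch): perturb weights to the approximate worst case within a
    rho-ball, recompute the gradient there, then step from the original weights."""
    # First pass: gradient at the current weights.
    loss_fn(model(data), target).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    # Ascend to w + eps with eps = rho * grad / ||grad||.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / grad_norm)
    # Second pass: gradient at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()
    # Restore the original weights, then apply the optimizer step with the SAM gradient.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / grad_norm)
    optimizer.step()
    optimizer.zero_grad()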
Foret et al. (2020) provided a PAC-Bayes bound on the generalization error of SAM to show that it will
generalize well, while the bound only holds for the infeasible average-direction perturbation instead of
∗Equal contribution.
37th Conference on Neural Information Processing Systems (NeurIPS 2023). |
2312.11514.pdf | LLM in a flash :
Efficient Large Language Model Inference with Limited Memory
Keivan Alizadeh, Iman Mirzadeh∗, Dmitry Belenko∗, S. Karen Khatamifard,
Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar
Apple†
Abstract
Large language models (LLMs) are central to
modern natural language processing, delivering
exceptional performance in various tasks.
However, their substantial computational and
memory requirements present challenges,
especially for devices with limited DRAM
capacity. This paper tackles the challenge
of efficiently running LLMs that exceed the
available DRAM capacity by storing the model
parameters in flash memory, but bringing them
on demand to DRAM. Our method involves
constructing an inference cost model that takes
into account the characteristics of flash mem-
ory, guiding us to optimize in two critical areas:
reducing the volume of data transferred from
flash and reading data in larger, more contigu-
ous chunks. Within this hardware-informed
framework, we introduce two principal
techniques. First, “windowing” strategically
reduces data transfer by reusing previously
activated neurons, and second, “row-column
bundling”, tailored to the sequential data access
strengths of flash memory, increases the size
of data chunks read from flash memory. These
methods collectively enable running models
up to twice the size of the available DRAM,
with a 4-5x and 20-25x increase in inference
speed compared to naive loading approaches in
CPU and GPU, respectively. Our integration of
sparsity awareness, context-adaptive loading,
and a hardware-oriented design paves the way
for effective inference of LLMs on devices
with limited memory.
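A toy sketch of the windowing idea (our reading of the abstract, not the paper's implementation): only neurons predicted active for the current token that were not active within the last few tokens need to be fetched from flash, since the rest are assumed to already reside in DRAM:

def neurons_to_load(active_now, recent_active_sets, window=5):
    """Windowing sketch: return the set of neuron indices that must be read from flash
    for the current token, given the active sets of the previous tokens."""
    cached = set().union(*recent_active_sets[-window:]) if recent_active_sets else set()
    fetch = active_now - cached                  # must be read from flash
    recent_active_sets.append(active_now)        # update the sliding window
    return fetch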
1 Introduction
In recent years, large language models (LLMs),
such as GPT-3 (Brown et al., 2020), OPT (Zhang
et al., 2022b), and PaLM (Chowdhery et al., 2022),
have demonstrated strong performance across a
wide range of natural language tasks. However, the
∗Major Contribution
†{kalizadehvahid, imirzadeh, d_belenko, skhatamifard,
minsik, cdelmundo, mrastegari, farajtabar}@apple.com
Figure 1: Inference latency of 1 token when half the memory of the model is available (Falcon 7B and
OPT 6.7B on CPU, and OPT 6.7B on GPU; naive loading vs. ours, with time split into compute, load
from flash, and memory management). Our method selectively loads parameters on demand per token
generation step. The latency is the time needed to load from flash multiple times back and forth during
the generation of all tokens and the time needed for the computations, averaged over all generated tokens.
unprecedented capabilities of these models come
with substantial computational and memory re-
quirements for inference. LLMs can contain hun-
dreds of billions or even trillions of parameters,
which makes them challenging to load and run effi-
ciently, especially on resource-constrained devices.
Currently, the standard approach is to load the en-
tire model into DRAM (Dynamic Random Access
Memory) for inference (Rajbhandari et al., 2021;
Aminabadi et al., 2022). However, this severely
limits the maximum model size that can be run.
For example, a 7 billion parameter model requires
over 14GB of memory just to load the parameters
in half-precision floating point format, exceeding
the capabilities of most edge devices.
To address this limitation, we propose to store
the model parameters in flash memory, which is
at least an order of magnitude larger than DRAM.
Then, during inference, we directly load the re-
quired subset of parameters from the flash mem-
|
10.1101.2022.05.17.492325.pdf | Inferring Neural Activity Before Plasticity: A Foundation 1
for Learning Beyond Backpropagation 2
Yuhang Song1,2,*, Beren Millidge2, Tommaso Salvatori1, Thomas Lukasiewicz1,*, Zhenghua Xu1,3, and 3
Rafal Bogacz2,*4
1Department of Computer Science, University of Oxford, Oxford, United Kingdom 5
2Medical Research Council Brain Networks Dynamics Unit, University of Oxford, Oxford, United Kingdom 6
3State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China 7
*Corresponding authors: yuhang.song@bndu.ox.ac.uk; thomas.lukasiewicz@cs.ox.ac.uk; rafal.bogacz@ndcn.ox.ac.uk 8
Abstract9
For both humans and machines, the essence of learning is to pinpoint which components in its information
processing pipeline are responsible for an error in its output — a challenge that is known as credit
assignment. How the brain solves credit assignment is a key question in neuroscience, and also of
significant importance for artificial intelligence. It has long been assumed that credit assignment is
best solved by backpropagation, which is also the foundation of modern machine learning. However,
it has been questioned whether it is possible for the brain to implement backpropagation and learning
in the brain may actually be more efficient and effective than backpropagation. Here, we set out a
fundamentally different principle on credit assignment, called prospective configuration. In prospective
configuration, the network first infers the pattern of neural activity that should result from learning, and
then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate
that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established
family of models of cortical circuits, (2) enables learning that is more efficient and effective in many
contexts faced by biological organisms, and (3) reproduces surprising patterns of neural activity and
behaviour observed in diverse human and animal learning experiments. Our findings establish a new
foundation for learning beyond backpropagation, for both understanding biological learning and building
artificial intelligence.
The credit assignment problem1 lies at the very heart of learning. Backpropagation2–5, as a simple
yet effective credit assignment theory, has powered notable advances in artificial intelligence since its
inception6–11. It has also gained a predominant place in understanding learning in the brain1,12–21. Due to
this success, much recent work has focused on understanding how biological neural networks could learn in
a way similar to backpropagation22–31: although many proposed models do not implement backpropagation
exactly, they nevertheless try to approximate backpropagation, and much emphasis is placed on how
close this approximation is22–28,32–34. However, learning in the brain is superior to backpropagation
in many critical aspects — for example, compared to the brain, backpropagation requires many more
exposures to a stimulus to learn35 and suffers from catastrophic interference of newly and previously
stored information36,37. This raises the question of whether using backpropagation to understand learning
in the brain should be the main focus of the field.
Here, we propose that the brain instead solves credit assignment with a fundamentally different
principle, which we call prospective configuration. In prospective configuration, before synaptic weights
are modified, neural activity changes across the network so that output neurons better predict the target
output; only then are the synaptic weights (weights, for short) modified to consolidate this change in
neural activity. By contrast, in backpropagation the order is reversed — weight modification takes the lead
|
2402.01613.pdf | Nomic Embed: Training a Reproducible Long Context Text Embedder
Zach Nussbaum
zach@nomic.aiJohn X. Morris
jack@nomic.ai
jxm3@cornell.edu
Brandon Duderstadt
brandon@nomic.aiAndriy Mulyar
andriy@nomic.ai
Abstract
This technical report describes the training
of nomic-embed-text-v1, the first fully re-
producible, open-source, open-weights, open-
data, 8192 context length English text em-
bedding model that outperforms both OpenAI
Ada-002 and OpenAI text-embedding-3-small
on short and long-context tasks. We release
the training code and model weights under
an Apache 2 license. In contrast with other
open-source models, we release a training data
loader with 235 million curated text pairs that
allows for the full replication of nomic-embed-
text-v1. You can find code and data to repli-
cate the model at https://github.com/nomic-
ai/contrastors.
1 Introduction
Text embeddings are an integral component of
modern NLP applications powering retrieval-
augmented-generation (RAG) for LLMs and se-
mantic search (Lewis et al., 2021a; Izacard et al.,
2022b; Ram et al., 2023). These embeddings en-
code semantic information about sentences or doc-
uments as low-dimensional vectors that are used
in downstream applications, such as clustering for
data visualization, classification, and information
retrieval.
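As a generic illustration of how such embeddings are consumed downstream (not code from this report), semantic search reduces to a cosine-similarity ranking over embedding vectors:

import numpy as np

def top_k_by_cosine(query_vec, doc_vecs, k=5):
    """Toy semantic search: rank documents by cosine similarity between embeddings.
    query_vec is (d,), doc_vecs is (n, d); vectors can come from any text embedding model."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k], scores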
The majority of the top open-source models
on the MTEB benchmark (Muennighoff et al.,
2023) are limited to context lengths of 512, such
as E5 Wang et al. (2022), GTE Li et al. (2023),
and BGE Xiao et al. (2023). This short context
length reduces model utility in domains where
overall document semantics are not localized to
sentences or paragraphs. Most top embedding
models with a context length longer than 2048
are closed-source, such as Voyage-lite-01-instruct
Voyage (2023) and text-embedding-ada-002 Nee-
lakantan et al. (2022).
Figure 1: Text Embedding Model Benchmarks. Aggregate performance of nomic-embed-text-v1, OpenAI
text-embedding-ada, OpenAI text-embedding-3-small and jina-embedding-base-v2 on short and long
context benchmarks (Jina Long Context, LoCo, and MTEB). Nomic Embed is the only fully auditable
long-context model that exceeds OpenAI text-embedding-ada, OpenAI text-embedding-3-small, and Jina
performance across both short and long context benchmarks. X-axis units vary per benchmark suite.
The top two performing open-source long context embedding models are jina-embedding-v2-
base-en Günther et al. (2024) and E5-Mistral-7b-
instruct Wang et al. (2023b).
Unfortunately, jina-embedding-v2-base does
not surpass OpenAI’s text-embedding-ada-002
Neelakantan et al. (2022) (see Table 1). Further,
E5-Mistral Wang et al. (2023b) is not feasible to
use in many engineering applications due to the
large inference requirements of a 7B parameter
transformer, and is not recommended for use be-
yond 4096 tokens.
This report describes how we trained nomic-
embed-text-v1, a 137M parameter, open-source,
open-weights, open-data, 8192 sequence length
model that surpasses OpenAI text-embedding-ada
and text-embedding-3-small performance on both
short and long context benchmarks (Table 1). We
release the model weights and codebase under an
Apache-2 license. We additionally release our
curated training dataset to enable end-to-end au-
ditability and replication of the model. |
2309.16039.pdf | Effective Long-Context Scaling of Foundation Models
Wenhan Xiong†∗, Jingyu Liu†, Igor Molybog,
Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta,
Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang,
Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan,
Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang∗, Hao Ma∗
Meta
Abstract
We present a series of long-context LLMs that support effective context windows
of up to 32,768 tokens. Our model series are built through continual pretraining
from LLAMA 2with longer training sequences and on a dataset where long texts
are upsampled. We perform extensive evaluation on language modeling, synthetic
context probing tasks, and a wide range of research benchmarks. On research
benchmarks, our models achieve consistent improvements on most regular tasks
and significant improvements on long-context tasks over LLAMA 2. Notably, with
a cost-effective instruction tuning procedure that does not require human-annotated
long instruction data, the 70B variant can already surpass gpt-3.5-turbo-16k ’s
overall performance on a suite of long-context tasks. Alongside these results, we
provide an in-depth analysis on the individual components of our method. We
delve into LLAMA ’s position encodings and discuss its limitation in modeling
long dependencies. We also examine the impact of various design choices in
the pretraining process, including the data mix and the training curriculum of
sequence lengths – our ablation experiments suggest that having abundant long
texts in the pretrain dataset is notthe key to achieving strong performance, and
we empirically verify that long context continual pretraining is more efficient and
similarly effective compared to pretraining from scratch with long sequences.
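The power-law fit reported in the figure below, L(c) = (α/c)^β + γ, can be reproduced on one's own loss measurements with a standard curve fit; a sketch with hypothetical data:

import numpy as np
from scipy.optimize import curve_fit

def loss_vs_context(c, alpha, beta, gamma):
    """Power-law form used in the figure: L(c) = (alpha / c)**beta + gamma."""
    return (alpha / c) ** beta + gamma

# Hypothetical (context_length, validation_loss) measurements for illustration only.
c = np.array([256, 512, 1024, 2048, 4096, 8192, 16384, 32768], dtype=float)
loss = loss_vs_context(c, 25.4, 0.45, 1.56) + 0.01 * np.random.default_rng(0).standard_normal(c.size)
params, _ = curve_fit(loss_vs_context, c, loss, p0=[20.0, 0.5, 1.5])
print(params)  # recovered (alpha, beta, gamma)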
Figure 1: We show that our model’s validation loss can be fit as a function of the context length:
L(c) = (α/c)^β + γ, with a different set of α, β, γ for each model size (plotted as validation loss versus
context length; fitted values per model size are α = 25.4, β = 0.45, γ = 1.56 for 7B; α = 19.5, β = 0.48,
γ = 1.45 for 13B; α = 17.7, β = 0.50, γ = 1.41 for 34B; and α = 17.9, β = 0.51, γ = 1.35 for 70B). This
power-law relationship also suggests that context length is another important axis of scaling LLMs and our
model can continually improve its performance as we increase the context length up to 32,768 tokens. |
blei03a.pdf | Journalof Machine Learning Research 3 (2003)993-1022 Submitted 2/02; Published 1/03
Latent Dirichlet Allocation
David M. Blei BLEI@CS.BERKELEY .EDU
Computer Science Division
University of California, Berkeley, CA 94720, USA
Andrew Y. Ng ANG@CS.STANFORD .EDU
Computer Science Department, Stanford University, Stanford, CA 94305, USA
Michael I. Jordan JORDAN@CS.BERKELEY.EDU
Computer Science Division and Department of Statistics
University of California, Berkeley, CA 94720, USA
Editor:John Lafferty
Abstract
We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of
discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each
item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in
turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of
text modeling, the topic probabilities provide an explicit representation of a document. We present
efficient approximate inference techniques based on variational methods and an EM algorithm for
empirical Bayes parameter estimation. We report results in document modeling, text classification,
and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI
model.
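An illustrative sketch of the three-level generative process described above (not the paper's inference code): each document draws topic proportions from a Dirichlet prior, and each word draws a topic and then a word from that topic.

import numpy as np

def generate_document(n_words, alpha, topic_word_probs, rng=None):
    """Toy LDA generative process. alpha is the Dirichlet concentration vector over topics;
    topic_word_probs is a (n_topics, vocab_size) matrix of per-topic word distributions."""
    rng = rng or np.random.default_rng(0)
    theta = rng.dirichlet(alpha)                        # per-document topic proportions
    words = []
    for _ in range(n_words):
        z = rng.choice(len(alpha), p=theta)             # topic assignment for this word
        w = rng.choice(topic_word_probs.shape[1], p=topic_word_probs[z])
        words.append(w)
    return words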
1. Introduction
In this paper we consider the problem of modeling text corpora and other collections of discrete
data. The goal is to find short descriptions of the members of a collection that enable efficient
processing of large collections while preserving the essential statistical relationships that are useful
for basic tasks such as classification, novelty detection, summarization, and similarity and relevance
judgments.
Significant progress has been made on this problem by researchers in the field of informa-
tion retrieval (IR) (Baeza-Yates and Ribeiro-Neto, 1999). The basic methodology proposed by IR
researchers for text corpora—a methodology successfully deployed in modern Internet search
engines—reduces each document in the corpus to a vector of real numbers, each of which represents
ratios of counts. In the popular tf-idf scheme (Salton and McGill, 1983), a basic vocabulary
of “words” or “terms” is chosen, and, for each document in the corpus, a count is formed of the
number of occurrences of each word. After suitable normalization, this term frequency count is
compared to an inverse document frequency count, which measures the number of occurrences of a
©2003 David M. Blei, Andrew Y. Ng and Michael I. Jordan. |
2204.06860.pdf | AlphaFold2 can predict single-mutation effects
John M. McBride,1,∗Konstantin Polev,1, 2Amirbek Abdirasulov,3
Vladimir Reinharz,4Bartosz A. Grzybowski,1, 5, †and Tsvi Tlusty1, 5, ‡
1Center for Soft and Living Matter, Institute for Basic Science, Ulsan 44919, South Korea
2Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
3Department of Computer Science and Engineering,
Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
4Université du Québec à Montréal, Canada
5Departments of Physics and Chemistry, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
AlphaFold2 (AF) is a promising tool, but is it accurate enough to predict single mutation effects? Here, we
report that the localized structural deformation between protein pairs differing by only 1-3 mutations – as mea-
sured by the effective strain – is correlated across 3,901 experimental and AF-predicted structures. Furthermore,
analysis of ∼11,000 proteins shows that the local structural change correlates with various phenotypic changes.
These findings suggest that AF can predict the range and magnitude of single-mutation effects on average, and
we propose a method to improve precision of AF predictions and to indicate when predictions are unreliable.
Alteration of one or few amino acid residues can af-
fect structure [1–3] and function [4, 5] of a protein and,
in extreme cases, be the difference between health and dis-
ease [6, 7]. Understanding structural consequences of point
mutations is important for drug design [8, 9] and could also
accelerate optimization of enzymatic function via directed
evolution [10, 11]. In these and other applications, theoret-
ical models [12] could be of immense help, provided they
are sufficiently accurate. In this context, AlphaFold2 [13]
has recently made breakthroughs in predicting global pro-
tein structure from sequence with unprecedented precision.
Notwithstanding, it is not yet known whether AF is sen-
sitive enough to detect small, local effects of single muta-
tions. Even if AF achieves high accuracy, the effect of a
mutation may be small compared to the inherent confor-
mational dynamics of the protein – predicting static struc-
tures may not be particularly informative [14–16]. Further-
more, as accuracy improves, evaluating the quality of pre-
dictions becomes increasingly complicated by the inherent
noise in experimental measurements [16–23]. So far, no
study has evaluated whether AF can accurately measure
structural changes due to single mutations, and there are
conflicting reports as to whether AF can predict the effect
of a mutation on protein stability [24–28]. Furthermore, re-
cent evidence suggests that AF learns the energy functional
underlying folding, raising the question of whether the in-
ferred functional is sensitive enough to discern the subtle
physical changes due to a single mutation [29]. We aim to
resolve this issue by comparing AF predictions with exten-
sive data on protein structure and function.
We examine AF predictions in light of structural data
from a curated set of proteins from the Protein Data Bank
(PDB) [30], and phenotype data from high-throughput ex-
periments [31–33]. We find that AF can detect the ef-
fect of a mutation on structure by identifying local de-
formations between protein pairs differing by 1-3 muta-
tions. The deformation is probed by the effective strain
(ES) measure. We show that ES computed between a
pair of PDB structures is correlated with the ES computed
for the corresponding pair of structures predicted by AF.
Furthermore, analysis of ∼11,000 proteins whose function
was probed in three high-throughput studies shows sig-
nificant correlations between AF-predicted ES and three
categories of phenotype (fluorescence, folding, catalysis) across three experimental data sets [31–33]. These sets of
correlations suggest that AF can predict the range and mag-
nitude of single-mutation effects. We provide new tools
(github.com/mirabdi/PDAnalysis) for computing deforma-
tion in proteins, and a methodology for increasing the pre-
cision of AlphaFold predictions of mutation effects. Alto-
gether, these results indicate that AF can be used to pre-
dict physicochemical effects of missense mutations, un-
damming vast potential in the field of protein design and
evolution.
AF can predict local structural change.— We illustrate our
approach by analyzing wild-type (WT; 6BDD_A ) and single-
mutant ( 6BDE_A , A71G) structures of H-NOX protein from
K. algicida (Fig. 1D) [34]. To quantify local deformation,
we calculate the effective strain (ES) per residue Si (See
App. A) for, respectively, experimental and AF-predicted
pairs of structures (Fig. 1A). The ES is the mean relative
change in distance from Cα of residue i to neighboring
Cα positions within a range of 13 Å. ES provides a ro-
bust estimate of the magnitude of local strain, which ac-
counts also for non-affine deformation in addition to affine
deformation [35–39]. Like the frame-aligned-point-error
(FAPE) measure used in training AF [13], ES is invariant
to alignment. In H-NOX, we observe that the Si is highest
at, and decays away from the mutated site, showing a cor-
relation with the distance from the mutated site (Fig. 1B).
We find that Si is correlated across PDB and AF structures
(Fig. 1C,E). Taken together, these correlations suggest that
Si is a sensitive measure of local structural change, and that
AF is capable of predicting such structural change upon
mutation.
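A simplified sketch of an effective-strain-style calculation from two Cα coordinate arrays (a plain distance-based version for illustration; see App. A and the released PDAnalysis package for the authors' exact definition):

import numpy as np

def effective_strain(ca_ref, ca_mut, cutoff=13.0):
    """For each residue, the mean relative change in distance to its Cα neighbours
    (within `cutoff` Å in the reference structure). Inputs are (n_residues, 3) arrays."""
    d_ref = np.linalg.norm(ca_ref[:, None] - ca_ref[None, :], axis=-1)
    d_mut = np.linalg.norm(ca_mut[:, None] - ca_mut[None, :], axis=-1)
    strain = np.zeros(len(ca_ref))
    for i in range(len(ca_ref)):
        nbrs = (d_ref[i] < cutoff) & (np.arange(len(ca_ref)) != i)
        strain[i] = np.mean(np.abs(d_mut[i, nbrs] - d_ref[i, nbrs]) / d_ref[i, nbrs])
    return strain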
Experimental measurement variability limits evaluation.—
Before exploring AF predictions in more detail, we first ex-
amine variation within experimental structures by compar-
ing repeat measurements of the same protein. In Fig. 1F we
show the distribution of Si calculated for all residues in all
pairs (Supplemental Material (SM) Sec. 1A [40]) of pro-
tein structures with identical sequences (number of muta-
tions, M=0); we excluded pairs where the crystallographic
group differed (SM Sec. 1B [40]). Protein structures vary
considerably between repeat measurements (average ES is
⟨Si⟩=0.018, and the average Root Mean Square Deviation
is RMSD =0.24Å). In comparison, differences between
repeat predictions of AF are much lower (∆Si = 0.005, |
2404.14619.pdf | OpenELM: An Efficient Language Model Family with Open-source Training
and Inference Framework
Sachin Mehta Mohammad Hossein Sekhavat Qingqing Cao Maxwell Horton
Yanzi Jin Chenfan Sun Iman Mirzadeh Mahyar Najibi Dmitry Belenko
Peter Zatloukal Mohammad Rastegari
Apple
Model            Public dataset   Open-source Code   Open-source Weights   Model size   Pre-training tokens   Average acc. (in %)
OPT [55]         ✗                ✓                  ✓                     1.3 B        0.2 T                 41.49
Pythia [5]       ✓                ✓                  ✓                     1.4 B        0.3 T                 41.83
MobiLlama [44]   ✓                ✓                  ✓                     1.3 B        1.3 T                 43.55
OLMo [17]        ✓                ✓                  ✓                     1.2 B        3.0 T                 43.57
OpenELM (Ours)   ✓                ✓                  ✓                     1.1 B        1.5 T                 45.93
Table 1. OpenELM vs. public LLMs. OpenELM outperforms comparable-sized existing LLMs pretrained on publicly available datasets.
Notably, OpenELM outperforms the recent open LLM, OLMo, by 2.36% while requiring 2×fewer pre-training tokens. The average
accuracy is calculated across multiple tasks listed in Tab. 3b, which are also part of the OpenLLM leaderboard [4]. Models pretrained with
less data are highlighted in gray color.
Abstract
The reproducibility and transparency of large language
models are crucial for advancing open research, ensuring
the trustworthiness of results, and enabling investigations
into data and model biases, as well as potential risks. To
this end, we release OpenELM, a state-of-the-art open lan-
guage model. OpenELM uses a layer-wise scaling strategy
to efficiently allocate parameters within each layer of the
transformer model, leading to enhanced accuracy. For ex-
ample, with a parameter budget of approximately one bil-
lion parameters, OpenELM exhibits a 2.36% improvement
in accuracy compared to OLMo while requiring 2×fewer
pre-training tokens.
Diverging from prior practices that only provide model
weights and inference code, and pre-train on private
datasets, our release includes the complete framework for
training and evaluation of the language model on publicly
available datasets, including training logs, multiple check-
points, and pre-training configurations. We also release
code to convert models to MLX library for inference and
fine-tuning on Apple devices. This comprehensive release
aims to empower and strengthen the open research commu-
nity, paving the way for future open research endeavors.
Our source code along with pre-trained model weights and training recipes is available at https://github.com/apple/corenet. Additionally, OpenELM models can be found on HuggingFace at: https://huggingface.co/apple/OpenELM.
1. Introduction
Transformer-based [48] large language models (LLM)
are revolutionizing the field of natural language processing
[7, 46]. These models are isotropic, meaning that they have
the same configuration ( e.g., number of heads and feed-
forward network dimensions) for each transformer layer.
Though such isotropic models are simple, they may not al-
locate parameters efficiently inside the model.
In this work, we develop and release OpenELM, a fam-
ily of pre-trained and fine-tuned models on publicly avail-
able datasets. At the core of OpenELM lies layer-wise
scaling [30], enabling more efficient parameter allocation
across layers. This method utilizes smaller latent dimen-
sions in the attention and feed-forward modules of the trans-
former layers closer to the input, and gradually widening the
layers as they approach the output.
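A rough sketch of this layer-wise scaling idea is given below: per-layer head counts and feed-forward widths are ramped from the input toward the output of the network. The linear ramp, bounds, and head size are hypothetical placeholders, not the schedule actually used by OpenELM.

    def layerwise_dims(num_layers, model_dim, head_dim=64,
                       min_heads=4, max_heads=16, min_ffn_mult=0.5, max_ffn_mult=4.0):
        # Hypothetical linear ramp: layers near the input get fewer attention heads
        # and a smaller FFN multiplier; layers near the output get wider ones.
        configs = []
        for i in range(num_layers):
            t = i / max(num_layers - 1, 1)
            n_heads = int(round(min_heads + t * (max_heads - min_heads)))
            ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
            configs.append({"layer": i,
                            "num_heads": n_heads,
                            "attn_dim": n_heads * head_dim,
                            "ffn_dim": int(round(ffn_mult * model_dim))})
        return configs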
We release the complete framework, encompassing data
preparation, training, fine-tuning, and evaluation proce-
dures, alongside multiple pre-trained checkpoints and train-
ing logs, to facilitate open research. Importantly, OpenELM
outperforms existing open LLMs that are pre-trained us-
ing publicly available datasets (Tab. 1). For example,
OpenELM with 1.1 billion parameters outperforms OLMo
|
1312.6114.pdf | Auto-Encoding Variational Bayes
Diederik P. Kingma
Machine Learning Group
Universiteit van Amsterdam
dpkingma@gmail.com
Max Welling
Machine Learning Group
Universiteit van Amsterdam
welling.max@gmail.com
Abstract
How can we perform efficient inference and learning in directed probabilistic
models, in the presence of continuous latent variables with intractable posterior
distributions, and large datasets? We introduce a stochastic variational inference
and learning algorithm that scales to large datasets and, under some mild differ-
entiability conditions, even works in the intractable case. Our contributions are
two-fold. First, we show that a reparameterization of the variational lower bound
yields a lower bound estimator that can be straightforwardly optimized using stan-
dard stochastic gradient methods. Second, we show that for i.i.d. datasets with
continuous latent variables per datapoint, posterior inference can be made espe-
cially efficient by fitting an approximate inference model (also called a recogni-
tion model) to the intractable posterior using the proposed lower bound estimator.
Theoretical advantages are reflected in experimental results.
1 Introduction
How can we perform efficient approximate inference and learning with directed probabilistic models
whose continuous latent variables and/or parameters have intractable posterior distributions? The
variational Bayesian (VB) approach involves the optimization of an approximation to the intractable
posterior. Unfortunately, the common mean-field approach requires analytical solutions of expecta-
tions w.r.t. the approximate posterior, which are also intractable in the general case. We show how a
reparameterization of the variational lower bound yields a simple differentiable unbiased estimator
of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for ef-
ficient approximate posterior inference in almost any model with continuous latent variables and/or
parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques.
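For concreteness, a minimal sketch of the reparameterized one-sample estimator for a diagonal-Gaussian posterior is shown below; the encoder and decoder are placeholder callables, and the Bernoulli reconstruction term is only one possible likelihood choice.

    import torch

    def negative_elbo(x, encoder, decoder):
        # encoder(x) returns the mean and log-variance of q(z|x).
        mu, log_var = encoder(x)
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        z = mu + std * eps                      # reparameterization: gradients flow through mu, std
        x_recon = decoder(z)
        # Reconstruction term (assumes x in [0, 1]; swap for a Gaussian likelihood if needed).
        recon = torch.nn.functional.binary_cross_entropy(x_recon, x, reduction="sum")
        # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl                       # minimize this, i.e. maximize the lower bound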
For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-
Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially
efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very
efficient approximate posterior inference using simple ancestral sampling, which in turn allows us
to efficiently learn the model parameters, without the need of expensive iterative inference schemes
(such as MCMC) per datapoint. The learned approximate posterior inference model can also be used
for a host of tasks such as recognition, denoising, representation and visualization purposes. When
a neural network is used for the recognition model, we arrive at the variational auto-encoder .
2 Method
The strategy in this section can be used to derive a lower bound estimator (a stochastic objective
function) for a variety of directed graphical models with continuous latent variables. We will restrict
ourselves here to the common case where we have an i.i.d. dataset with latent variables per datapoint,
and where we like to perform maximum likelihood (ML) or maximum a posteriori (MAP) inference
on the (global) parameters, and variational inference on the latent variables. It is, for example,
|
2103.04047.pdf | Reinforcement Learning, Bit by Bit
Suggested Citation: Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla,
Morteza Ibrahimi, Ian Osband and Zheng Wen (2018), “Reinforcement Learning, Bit by
Bit”, : Vol. xx, No. xx, pp 1–18. DOI: 10.1561/XXXXXXXXX.
Xiuyuan Lu, DeepMind, lxlu@deepmind.com
Benjamin Van Roy, DeepMind, benvanroy@deepmind.com
Vikranth Dwaracherla, DeepMind, vikranthd@deepmind.com
Morteza Ibrahimi, DeepMind, mibrahimi@deepmind.com
Ian Osband, DeepMind, iosband@deepmind.com
Zheng Wen, DeepMind, zhengwen@deepmind.com
This article may be used only for the purpose of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval. Boston — Delft |
10.1101.2020.12.15.422761.pdf | TRANSFORMER PROTEIN LANGUAGE MODELS ARE
UNSUPERVISED STRUCTURE LEARNERS
Roshan Rao∗, UC Berkeley, rmrao@berkeley.edu
Joshua Meier, Facebook AI Research, jmeier@fb.com
Tom Sercu, Facebook AI Research, tsercu@fb.com
Sergey Ovchinnikov, Harvard University, so@g.harvard.edu
Alexander Rives, Facebook AI Research & New York University, arives@cs.nyu.edu
ABSTRACT
Unsupervised contact prediction is central to uncovering physical, structural, and
functional constraints for protein structure determination and design. For decades,
the predominant approach has been to infer evolutionary constraints from a set of
related sequences. In the past year, protein language models have emerged as a po-
tential alternative, but performance has fallen short of state-of-the-art approaches
in bioinformatics. In this paper we demonstrate that Transformer attention maps
learn contacts from the unsupervised language modeling objective. We find the
highest capacity models that have been trained to date already outperform a state-
of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can
be replaced with a single forward pass of an end-to-end model.1
1 I NTRODUCTION
Unsupervised modeling of protein contacts has an important role in computational protein de-
sign (Russ et al., 2020; Tian et al., 2018; Blazejewski et al., 2019) and is a central element of
all current state-of-the-art structure prediction methods (Wang et al., 2017; Senior et al., 2020; Yang
et al., 2019). The standard bioinformatics pipeline for unsupervised contact prediction includes mul-
tiple components with specialized tools and databases that have been developed and optimized over
decades. In this work we propose replacing the current multi-stage pipeline with a single forward
pass of a pre-trained end-to-end protein language model.
In the last year, protein language modeling with an unsupervised training objective has been inves-
tigated by multiple groups (Rives et al., 2019; Alley et al., 2019; Heinzinger et al., 2019; Rao et al.,
2019; Madani et al., 2020). The longstanding practice in bioinformatics has been to fit linear models
on focused sets of evolutionarily related and aligned sequences; by contrast, protein language model-
ing trains nonlinear deep neural networks on large databases of evolutionarily diverse and unaligned
sequences. High capacity protein language models have been shown to learn underlying intrinsic
properties of proteins such as structure and function from sequence data (Rives et al., 2019).
A line of work in this emerging field proposes the Transformer for protein language modeling (Rives
et al., 2019; Rao et al., 2019). Originally developed in the NLP community to represent long range
context, the main innovation of the Transformer model is its use of self-attention (Vaswani et al.,
2017). Self-attention has particular relevance for the modeling of protein sequences. Unlike convo-
lutional and recurrent LSTM models, the Transformer constructs a pairwise interaction map between
all positions in the sequence. In principle this mechanism has an ideal form to model residue-residue
contacts.
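One crude way to read contacts out of such pairwise attention maps is sketched below: average over layers and heads, symmetrize, and apply the average product correction used in coevolution analysis before ranking residue pairs. The paper's actual procedure (e.g. any learned combination of attention heads) is not reproduced here; this is only an assumed heuristic.

    import numpy as np

    def contact_ranking_from_attention(attn, min_sep=6):
        # attn: (num_layers, num_heads, L, L) self-attention maps for one sequence.
        a = attn.mean(axis=(0, 1))
        a = 0.5 * (a + a.T)                        # symmetrize
        apc = a.mean(axis=1, keepdims=True) @ a.mean(axis=0, keepdims=True) / a.mean()
        scores = a - apc                           # average product correction
        L = scores.shape[0]
        i, j = np.triu_indices(L, k=min_sep)       # ignore short-range pairs
        order = np.argsort(scores[i, j])[::-1]
        return list(zip(i[order].tolist(), j[order].tolist(), scores[i, j][order].tolist()))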
In theory, end-to-end learning with a language model has advantages over the bioinformatics
pipeline: (i) it replaces the expensive query, alignment, and training steps with a single forward
∗Work performed during an internship at Facebook.
1Weights for all ESM-1 and ESM-1b models, as well as regressions trained on these models can be found
at https://github.com/facebookresearch/esm.
bioRxiv preprint doi: https://doi.org/10.1101/2020.12.15.422761; this version posted December 15, 2020. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license. |
2401.16405.pdf | Scaling Sparse Fine-Tuning to Large Language Models
Alan Ansell1, Ivan Vulić1, Hannah Sterz1, Anna Korhonen1, Edoardo M. Ponti2,1
Abstract
Large Language Models (LLMs) are difficult to
fully fine-tune (e.g., with instructions or human
feedback) due to their sheer number of parameters.
A family of parameter-efficient sparse fine-tuning
methods have proven promising in terms of per-
formance but their memory requirements increase
proportionally to the size of the LLMs. In this
work, we scale sparse fine-tuning to state-of-the-
art LLMs like LLaMA 2 7B and 13B. We propose
SpIEL, a novel sparse fine-tuning method which,
for a desired density level, maintains an array of
parameter indices and the deltas of these parame-
ters relative to their pretrained values. It iterates
over: (a) updating the active deltas, (b) pruning
indices (based on the change of magnitude of their
deltas) and (c) regrowth of indices. For regrowth,
we explore two criteria based on either the accu-
mulated gradients of a few candidate parameters
or their approximate momenta estimated using
the efficient SM3 optimizer. We experiment with
instruction-tuning of LLMs on standard dataset
mixtures, finding that SpIEL is often superior to
popular parameter-efficient fine-tuning methods
like LoRA (low-rank adaptation) in terms of per-
formance and comparable in terms of run time.
We additionally show that SpIEL is compatible
with both quantization and efficient optimizers, to
facilitate scaling to ever-larger model sizes. We release the code for SpIEL at https://github.com/AlanAnsell/peft and for the instruction-tuning experiments at https://github.com/ducdauge/sft-llm.
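The iterate-over-(update, prune, regrow) idea described in the abstract can be sketched roughly as follows; the pruning criterion here uses delta magnitude as a stand-in for the paper's change-of-magnitude criterion, and the optimizer step is a placeholder, so this should not be read as the exact SpIEL algorithm.

    import torch

    def sft_maintenance_step(deltas, idx, grad_accum, optimizer_step, k):
        # deltas:     dense vector of learned offsets to the frozen weights; only
        #             entries listed in `idx` are active (fixed density).
        # grad_accum: accumulated gradient magnitudes over all parameters,
        #             used here as the regrowth signal.
        # (a) update the active deltas with any optimizer step
        deltas = optimizer_step(deltas, idx)
        # (b) prune the k active indices with the smallest deltas (stand-in criterion)
        keep = torch.topk(deltas[idx].abs(), k=idx.numel() - k).indices
        drop_mask = torch.ones(idx.numel(), dtype=torch.bool)
        drop_mask[keep] = False
        deltas[idx[drop_mask]] = 0.0
        idx = idx[keep]
        # (c) regrow k currently-inactive indices with the largest accumulated gradients
        candidates = grad_accum.clone()
        candidates[idx] = float("-inf")
        idx = torch.cat([idx, torch.topk(candidates, k=k).indices])
        return deltas, idx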
1. Introduction
The scale of Large Language Models (LLMs), such as Fal-
con (Almazrouei et al., 2023), LLaMA 2 (Touvron et al.,
2023), and Mistral (Jiang et al., 2023), is one of the keys
to their state-of-the-art performance (Kaplan et al., 2020).
1University of Cambridge 2University of Edinburgh. Correspondence to: Alan Ansell <aja63@cam.ac.uk>.
However, this scale is both a blessing and a curse as tailor-
ing LLMs to specific applications via fine-tuning presents
a formidable challenge: if performed naïvely, this incurs
the cost of updating an incredibly large set of parameters.
A family of lightweight methods for LLM adaptation have
been proposed to mitigate this issue, known collectively
as Parameter-Efficient Fine-Tuning (PEFT). PEFT meth-
ods learn a small number of new parameters, denoted as ϕ,
which augment the frozen LLM weights θ(Pfeiffer et al.,
2023; Lialin et al., 2023). For instance, Low-Rank Adapters
(LoRA; Hu et al., 2022) learn additional low-rank matrices
to modify the linear layers in Transformer blocks.
PEFT methods based on unstructured sparse fine-tuning
(SFT1), where ϕis a sparse vector added to θ, have recently
shown promise (Sung et al., 2021; Guo et al., 2021; Ansell
et al., 2022). These offer a strong trade-off between low
number of parameters and high model performance without
inserting additional layers into the LLM’s neural architec-
ture, which would reduce model efficiency. In addition,
multiple SFTs are composable while avoiding interference
(Ansell et al., 2022), which facilitates the integration of
multiple sources of knowledge into LLMs. Formally, SFT
can be conceived of as performing joint optimization over
the fixed-size set of non-zero indices of ϕand their deltas
with respect to the LLM weights. Due to the intricacies of
this optimization, however, SFT has so far been severely
limited by a major drawback, namely, its high memory re-
quirements: existing methods for selecting non-zero indices
include learning a mask (Sung et al., 2021), estimating the
Fisher information (Guo et al., 2021), or calculating the
difference between initialization and convergence (Ansell
et al., 2022) for all LLM parameters . Hence, SFT is not
currently suitable for adapting LLM at large scales.
The main goal of this work is to overcome these challenges
by devising memory-efficient methods to update Large Lan-
guage Models (LLMs) sparsely, while maintaining perfor-
mance benefits, that is, retaining the same performance of
full-model fine-tuning or even surpassing it. Specifically,
we wish for the memory use during training (beyond that
required to store the pretrained model weights) to scale
1Note the unfortunate confusion of nomenclature with super-
vised fine-tuning (also frequently referred to as SFT).
|
2308.03296.pdf | Studying Large Language Model Generalization
with Influence Functions
Roger Grosse∗†, Juhan Bae∗†, Cem Anil∗†
Nelson Elhage‡
Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus,
Ethan Perez, Evan Hubinger, Kamil˙ e Lukoši¯ ut˙ e, Karina Nguyen, Nicholas Joseph,
Sam McCandlish
Jared Kaplan, Samuel R. Bowman
Abstract
When trying to gain better visibility into a machine learning model in order to understand
and mitigate the associated risks, a potentially valuable source of evidence is: which
training examples most contribute to a given behavior? Influence functions aim to answer a
counterfactual: how would the model’s parameters (and hence its outputs) change if a given
sequence were added to the training set? While influence functions have produced insights for
small models, they are difficult to scale to large language models (LLMs) due to the difficulty
of computing an inverse-Hessian-vector product (IHVP). We use the Eigenvalue-corrected
Kronecker-Factored Approximate Curvature (EK-FAC) approximation to scale influence
functions up to LLMs with up to 52 billion parameters. In our experiments, EK-FAC
achieves similar accuracy to traditional influence function estimators despite the IHVP
computation being orders of magnitude faster. We investigate two algorithmic techniques
to reduce the cost of computing gradients of candidate training sequences: TF-IDF filtering
and query batching. We use influence functions to investigate the generalization patterns of
LLMs, including the sparsity of the influence patterns, increasing abstraction with scale,
math and programming abilities, cross-lingual generalization, and role-playing behavior.
Despite many apparently sophisticated forms of generalization, we identify a surprising
limitation: influences decay to near-zero when the order of key phrases is flipped. Overall,
influence functions give us a powerful new tool for studying the generalization properties of
LLMs.
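The counterfactual above is usually approximated with a first-order influence score of the form -∇θL(query)ᵀ H⁻¹ ∇θL(candidate). The sketch below shows only the trivial damped-identity special case of the inverse-Hessian-vector product; EK-FAC, as used in the paper, replaces this with a Kronecker-factored curvature approximation that is not reproduced here.

    import torch

    def influence_score(query_grad, train_grad, damping=0.01):
        # Influence of adding one training sequence on a query loss:
        #   I(train, query) = - g_query^T H^{-1} g_train
        # Here H is approximated by damping * I (a deliberate oversimplification).
        ihvp = train_grad / damping
        return -torch.dot(query_grad.flatten(), ihvp.flatten())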
∗. Core Research Contributors (Equal Contributions).
†. University of Toronto and Vector Institute.
‡. Core Infrastructure Contributor.
All authors are at Anthropic. Correspondence to: roger@anthropic.com. |
journal.pone.0262314.pdf | RESEARCH ARTICLE
Limits to detecting epistasis in the fitness landscape of HIV
Avik Biswas1,2, Allan Haldane1,2, Ronald M. Levy1,2,3*
1 Department of Physics, Temple University, Philadelphia, PA, United States of America
2 Center for Biophysics and Computational Biology, Temple University, Philadelphia, PA, United States of America
3 Department of Chemistry, Temple University, Philadelphia, PA, United States of America
* ronlevy@temple.edu
Abstract
The rapid evolution of HIV is constrained by interactions between mutations which affect viral fitness. In this work, we explore the role of epistasis in determining the mutational fitness landscape of HIV for multiple drug target proteins, including Protease, Reverse Transcriptase, and Integrase. Epistatic interactions between residues modulate the mutation patterns involved in drug resistance, with unambiguous signatures of epistasis best seen in the comparison of the Potts model predicted and experimental HIV sequence "prevalences" expressed as higher-order marginals (beyond triplets) of the sequence probability distribution. In contrast, experimental measures of fitness such as viral replicative capacities generally probe fitness effects of point mutations in a single background, providing weak evidence for epistasis in viral systems. The detectable effects of epistasis are obscured by higher evolutionary conservation at sites. While double mutant cycles, in principle, provide one of the best ways to probe epistatic interactions experimentally without reference to a particular background, we show that the analysis is complicated by the small dynamic range of measurements. Overall, we show that global pairwise interaction Potts models are necessary for predicting the mutational landscape of viral proteins.
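For reference, a global pairwise-interaction Potts model of the kind invoked above assigns each sequence a statistical energy built from site fields and pairwise couplings; a minimal sketch is shown below (sign conventions vary, and the inference of the parameters from sequence data is not shown).

    import numpy as np

    def potts_energy(seq, h, J):
        # seq: length-L array of integer amino-acid states
        # h:   (L, q) site fields;  J: (L, L, q, q) pairwise couplings
        L = len(seq)
        fields = sum(h[i, seq[i]] for i in range(L))
        couplings = sum(J[i, j, seq[i], seq[j]] for i in range(L) for j in range(i + 1, L))
        return -(fields + couplings)   # P(S) proportional to exp(-E(S))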
Introduction
A major challenge in biological research, clinical medicine, and biotechnology is how to decipher and exploit the effects of mutations [1]. In efforts ranging from the identification of genetic variations underlying disease-causing mutations, to the understanding of the genotype-phenotype mapping, to development of modified proteins with useful properties, there is a need to rapidly assess the functional effects of mutations. Experimental techniques to assess the effect of multiple mutations on phenotype have been effective [2–5], but functional assays to test all possible combinations are not possible due to the vast size of the mutational landscape. Recent advances in biotechnology have enabled deep mutational scans [6] and multiplexed assays [7] for a more complete description of the mutational landscapes of a few proteins, but remain resource intensive and limited in scalability. The measured phenotypes depend on the type of experiment and are susceptible to changes in experimental conditions
Citation: Biswas A, Haldane A, Levy RM (2022) Limits to detecting epistasis in the fitness landscape of HIV. PLoS ONE 17(1): e0262314. https://doi.org/10.1371/journal.pone.0262314
Editor: Emilio Gallicchio, Brooklyn College of the City University of New York, UNITED STATES
Received: October 15, 2021
Accepted: December 20, 2021
Published: January 18, 2022
Peer Review History: PLOS recognizes the benefits of transparency in the peer review process; therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. The editorial history of this article is available here: https://doi.org/10.1371/journal.pone.0262314
Copyright: © 2022 Biswas et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability Statement: The code used for model inference and associated utilities is publicly available through Haldane et al., 2020 (https://doi.org/10.1016/j.cpc.2020.107312). HIV protein sequences, including sequence alignments, consensus sequences, etc. are obtained from the |
2308.06259.pdf | Self-Alignment with Instruction Backtranslation
Xian Li Ping Yu Chunting Zhou Timo Schick
Luke Zettlemoyer Omer Levy Jason Weston Mike Lewis
Meta AI
Abstract
We present a scalable method to build a high quality instruction following language
model by automatically labelling human-written text with corresponding instruc-
tions. Our approach, named instruction backtranslation , starts with a language
model finetuned on a small amount of seed data, and a given web corpus. The seed
model is used to construct training examples by generating instruction prompts
for web documents ( self-augmentation ), and then selecting high quality examples
from among these candidates ( self-curation ). This data is then used to finetune
a stronger model. Finetuning LLaMa on two iterations of our approach yields a
model that outperforms all other LLaMa-based models on the Alpaca leaderboard
not relying on distillation data, demonstrating highly effective self-alignment.
1 Introduction
Aligning large language models (LLMs) to perform instruction following typically requires finetuning
on large amounts of human-annotated instructions or preferences [Ouyang et al., 2022, Touvron
et al., 2023, Bai et al., 2022a] or distilling outputs from more powerful models [Wang et al., 2022a,
Honovich et al., 2022, Taori et al., 2023, Chiang et al., 2023, Peng et al., 2023, Xu et al., 2023].
Recent work highlights the importance of human-annotation data quality Zhou et al. [2023], Köpf
et al. [2023]. However, annotating instruction following datasets with such quality is hard to scale.
In this work, we instead leverage large amounts of unlabelled data to create a high quality instruction
tuning dataset by developing an iterative self-training algorithm. The method uses the model itself
to both augment and curate high quality training examples to improve its own performance. Our
approach, named instruction backtranslation , is inspired by the classic backtranslation method from
machine translation, in which human-written target sentences are automatically annotated with
model-generated source sentences in another language [Sennrich et al., 2015].
Our method starts with a seed instruction following model and a web corpus. The model is first used
to self-augment its training set: for each web document, it creates an instruction following training
example by predicting a prompt (instruction) that would be correctly answered by (a portion of) that
document. Directly training on such data (similarly to Köksal et al. [2023]) gives poor results in our
experiments, both because of the mixed quality of human written web text, and noise in the generated
instructions. To remedy this, we show that the same seed model can be used to self-curate the set of
newly created augmentation data by predicting their quality, and can then be self-trained on only the
highest quality (instruction, output) pairs. The procedure is then iterated, using the improved model
to better curate the instruction data, and re-training to produce a better model.
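A rough pseudocode sketch of one such round is given below; the generation, rating, and fine-tuning calls are hypothetical interfaces, and the scoring prompt and threshold are invented for illustration rather than taken from the paper.

    def backtranslation_round(model, seed_pairs, web_docs, min_rating=4):
        # Self-augment: predict an instruction that each web document would answer.
        candidates = []
        for doc in web_docs:
            instruction = model.generate("Write an instruction answered by:\n" + doc)
            candidates.append((instruction, doc))
        # Self-curate: have the same model score each candidate pair and keep the best.
        curated = [(ins, out) for ins, out in candidates
                   if model.rate(ins, out) >= min_rating]   # e.g. a 1-5 quality prompt
        # Fine-tune on seed data plus curated pairs, then iterate with the stronger model.
        return model.finetune(seed_pairs + curated)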
Our resulting model, Humpback , outperforms all other existing non-distilled models on the Alpaca
leaderboard Li et al. [2023]. Overall, instruction backtranslation is a scalable method for enabling
language models to improve their own ability to follow instructions. |
2309.00267.pdf | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI
Feedback
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard,
Colton Bishop, Victor Carbune, Abhinav Rastogi
Google Research
{harrisonlee,samratph,hassan}@google.com
Abstract
Reinforcement learning from human feedback
(RLHF) is effective at aligning large language
models (LLMs) to human preferences, but gath-
ering high-quality human preference labels is
a key bottleneck. We conduct a head-to-head
comparison of RLHF vs. RL from AI Feed-
back (RLAIF) - a technique where preferences
are labeled by an off-the-shelf LLM in lieu of
humans, and we find that they result in similar
improvements. On the task of summarization,
human evaluators prefer generations from both
RLAIF and RLHF over a baseline supervised
fine-tuned model in ∼70% of cases. Further-
more, when asked to rate RLAIF vs. RLHF
summaries, humans prefer both at equal rates.
These results suggest that RLAIF can yield
human-level performance, offering a potential
solution to the scalability limitations of RLHF.
1 Introduction
Reinforcement Learning from Human Feedback
(RLHF) is an effective technique for aligning lan-
guage models to human preferences (Stiennon
et al., 2020; Ouyang et al., 2022) and is cited as
one of the key drivers of success in modern conver-
sational language models like ChatGPT and Bard
(Liu et al., 2023; Manyika, 2023). By training
with reinforcement learning (RL), language mod-
els can be optimized on complex, sequence-level
objectives that are not easily differentiable with
traditional supervised fine-tuning.
The need for high-quality human labels is an
obstacle for scaling up RLHF, and one natural
question is whether artificially generated labels can
achieve comparable results. Several works have
shown that large language models (LLMs) exhibit
a high degree of alignment with human judgment -
even outperforming humans on some tasks (Gilardi
et al., 2023; Ding et al., 2023). Bai et al. (2022b)
was the first to explore using AI preferences to
train a reward model used for RL fine-tuning - a
Figure 1: Human evaluators strongly prefer RLHF and
RLAIF summaries over the supervised fine-tuned (SFT)
baseline. The differences in win rates between RLAIF vs.
SFT and RLHF vs. SFT are not statistically significant.
Additionally, when compared head-to-head, RLAIF is
equally preferred to RLHF by human evaluators. Error
bars denote 95% confidence intervals.
technique called "Reinforcement Learning from
AI Feedback" (RLAIF)1. While they showed that
utilizing a hybrid of human and AI preferences
in conjunction with the "Constitutional AI" self-
revision technique outperforms a supervised fine-
tuned baseline, their work did not directly compare
the efficacy of human vs. AI feedback, leaving
the question unanswered whether RLAIF can be a
suitable alternative to RLHF.
In this work, we directly compare RLAIF against
RLHF on the task of summarization. Given a text
and two candidate responses, we assign a prefer-
ence label using an off-the-shelf LLM. We then
train a reward model (RM) on the LLM prefer-
ences with a contrastive loss. Finally, we fine-tune
a policy model with reinforcement learning, using
1We use "RLAIF" to denote training a reward model on AI-
labeled preferences followed by conducting RL fine-tuning.
This is distinct from "Constitutional AI", which improves
upon a supervised learning model through iteratively asking an
LLM to generate better responses according to a constitution.
Both were introduced in Bai et al. (2022b) and are sometimes
confused for one another. |
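The reward-model step described above (training on AI-labelled preference pairs with a contrastive loss) can be sketched with a standard pairwise logistic objective; the reward model interface is a placeholder and the exact loss used in the paper may differ.

    import torch.nn.functional as F

    def preference_loss(reward_model, text, summary_chosen, summary_rejected):
        # The off-the-shelf LLM labelled `summary_chosen` as preferred.
        r_pos = reward_model(text, summary_chosen)      # scalar reward
        r_neg = reward_model(text, summary_rejected)
        return -F.logsigmoid(r_pos - r_neg).mean()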
big-book-of-mlops-2nd-edition-v2-102723-final.pdf | The Big Book of MLOps, 2nd Edition (eBook). Now including a section on LLMOps. Joseph Bradley | Rafi Kurlansik | Matt Thomson | Niall Turbitt. ModelOps, DataOps, DevOps |
codellama.pdf | Code Llama: Open Foundation Models for Code
Baptiste Rozière†, Jonas Gehring†, Fabian Gloeckle†,∗, Sten Sootla†, Itai Gat, Xiaoqing Ellen
Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov,
Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong,
Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
Thomas Scialom, Gabriel Synnaeve†
Meta AI
Abstract
We release Code Llama , a family of large language models for code based on Llama 2
providing state-of-the-art performance among open models, infilling capabilities, support
for large input contexts, and zero-shot instruction following ability for programming tasks.
We provide multiple flavors to cover a wide range of applications: foundation models
(Code Llama ), Python specializations ( Code Llama - Python ), and instruction-following
models ( Code Llama - Instruct ) with 7B, 13B and 34B parameters each. All models
are trained on sequences of 16k tokens and show improvements on inputs with up to 100k
tokens. 7B and 13B Code Llama andCode Llama - Instruct variants support infilling
based on surrounding content. Code Llama reaches state-of-the-art performance among
open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval
and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B
on HumanEval and MBPP, and all our models outperform every other publicly available
model on MultiPL-E. We release Code Llama under a permissive license that allows for
both research and commercial use.1
1 Introduction
Large language models (LLMs) power a rapidly increasing number of applications, having reached a proficiency
in natural language that allows them to be commanded and prompted to perform a variety of tasks (OpenAI,
2023; Touvron et al., 2023b). By utilizing large, in-domain datasets, their efficacy can be greatly improved
for applications that require a combination of both natural and domain-specific language and understanding
of specialized terminology. By training on domain-specific datasets, they have proved effective more broadly
on applications that require advanced natural language understanding. A prominent use-case is the formal
interaction with computer systems, such as program synthesis from natural language specifications, code
completion, debugging, and generating documentation (for a survey, see Xu & Zhu, 2022, also see Section 5).
In this work, we present Code Llama , a family of LLMs for code generation and infilling derived from
Llama 2 (Touvron et al., 2023b) and released under the same custom permissive license. We provide inference
code for both completion and infilling models in the accompanying repository.1Our approach is based on
gradually specializing and increasing the capabilities of Llama 2 models by applying a cascade of training
and fine-tuning steps (Figure 2):
•Code-training from foundation models. While most LLMs for code generation such as AlphaCode
(Li et al., 2022), InCoder (Fried et al., 2023) or StarCoder (Li et al., 2023) are trained on code only,
Codex (Chen et al., 2021) was fine-tuned from a general language model. We also start from a foundation
model ( Llama 2 , Touvron et al., 2023b) pretrained on general-purpose text and code data. Our comparison
(Section 3.4.1) shows that initializing our model with Llama 2 outperforms the same architecture trained
on code only for a given budget.
1https://github.com/facebookresearch/codellama
†: Core contributors ∗: Meta AI, CERMICS École des Ponts ParisTech
|
2310.16764.pdf | 2023-10-26
ConvNets Match Vision Transformers at Scale
Samuel L Smith1, Andrew Brock1, Leonard Berrada1 and Soham De1
1Google DeepMind
Many researchers believe that ConvNets perform well on small or moderately sized datasets, but are not
competitive with Vision Transformers when given access to datasets on the web-scale. We challenge this
belief by evaluating a performant ConvNet architecture pre-trained on JFT-4B, a large labelled dataset of
images often used for training foundation models. We consider pre-training compute budgets between
0.4k and 110k TPU-v4 core compute hours, and train a series of networks of increasing depth and width
from the NFNet model family. We observe a log-log scaling law between held out loss and compute
budget. After fine-tuning on ImageNet, NFNets match the reported performance of Vision Transformers
with comparable compute budgets. Our strongest fine-tuned model achieves a Top-1 accuracy of 90.4 %.
Keywords: ConvNets, CNN, Convolution, Transformer, Vision, ViTs, NFNets, JFT, Scaling, Image
Introduction
Convolutional Neural Networks (ConvNets) were
responsible for many of the early successes of
deep learning. Deep ConvNets were first de-
ployed commercially over 20 years ago (Le-
Cun et al., 1998), while the success of AlexNet
on the ImageNet challenge in 2012 re-ignited
widespread interest in the field (Krizhevsky et al.,
2017). For almost a decade ConvNets (typically
ResNets (He et al., 2016a,b)) dominated computer vision benchmarks. However in recent years
they have increasingly been replaced by Vision
Transformers (ViTs) (Dosovitskiy et al., 2020).
Simultaneously, the computer vision commu-
nity has shifted from primarily evaluating the
performance of randomly initialized networks
on specific datasets like ImageNet, to evaluat-
ing the performance of networks pre-trained on
large general purpose datasets collected from the
web. This raises an important question; do Vision
Transformers outperform ConvNet architectures
pre-trained with similar computational budgets?
Although most researchers in the community
believe Vision Transformers show better scaling
properties than ConvNets, there is surprisingly
little evidence to support this claim. Many papers
studying ViTs compare to weak ConvNet base-
lines (typically the original ResNet architecture
(He et al., 2016a)). Additionally, the strongest ViT models have been pre-trained using large compute budgets beyond 500k TPU-v3 core hours
(Zhai et al., 2022), which significantly exceeds
the compute used to pre-train ConvNets.
We evaluate the scaling properties of the NFNet
model family (Brock et al., 2021), a pure con-
volutional architecture published concurrently
with the first ViT papers, and the last ConvNet
to set a new SOTA on ImageNet. We do not
make any changes to the model architecture or
the training procedure (beyond tuning simple
hyper-parameters such as the learning rate or
epoch budget). We consider compute budgets
up to a maximum of 110k TPU-v4 core hours,1
and pre-train on the JFT-4B dataset which con-
tains roughly 4 billion labelled images from 30k
classes (Sun et al., 2017). We observe a log-log
scaling law between validation loss and the com-
pute budget used to pre-train the model. After
fine-tuning on ImageNet, our networks match the
performance of pre-trained ViTs with comparable
compute budgets (Alabdulmohsin et al., 2023;
Zhai et al., 2022), as shown in Figure 1.
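Such a log-log scaling law can be summarized by a power-law fit of held-out loss against compute; the two-parameter form below (with no irreducible-loss offset) is an assumption made only for illustration.

    import numpy as np

    def fit_power_law(compute, loss):
        # Fit loss ~= a * compute**(-b) by linear regression in log-log space.
        slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
        return np.exp(intercept), -slope       # (a, b)

    # Example: a, b = fit_power_law(core_hours, val_loss)
    #          predicted_loss = a * 110_000 ** (-b)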
Pre-trained NFNets obey scaling laws
We train a range of NFNet models of varying
depth and width on JFT-4B. Each model is trained
for a range of epoch budgets between 0.25 and
8, using a cosine decay learning rate schedule.
1TPU-v4 cores have roughly double the theoretical flops
of TPU-v3 cores, however both cores have similar memory.
Corresponding author(s): slsmith@google.com
©2023 Google DeepMind. All rights reserved. |
1912.10702.pdf | The Usual Suspects?
Reassessing Blame for VAE Posterior Collapse
Bin Dai daib13@mails.tsinghua.edu.cn
Institute for Advanced Study
Tsinghua University
Beijing, China
Ziyu Wang wzy196@gmail.com
Department of Computer Science and Technology
Tsinghua University
Beijing, China
David Wipf davidwipf@gmail.com
Microsoft Research
Beijing, China
Abstract
In narrow asymptotic settings Gaussian VAE models of continuous data have been shown
to possess global optima aligned with ground-truth distributions. Even so, it is well known
that poor solutions whereby the latent posterior collapses to an uninformative prior are
sometimes obtained in practice. However, contrary to conventional wisdom that largely
assigns blame for this phenomena on the undue influence of KL-divergence regularization,
we will argue that posterior collapse is, at least in part, a direct consequence of bad local
minima inherent to the loss surface of deep autoencoder networks. In particular, we prove
that even small nonlinear perturbations of affine VAE decoder models can produce such
minima, and in deeper models, analogous minima can force the VAE to behave like an
aggressive truncation operator, provably discarding information along all latent dimensions
in certain circumstances. Regardless, the underlying message here is not meant to undercut
valuable existing explanations of posterior collapse, but rather, to refine the discussion and
elucidate alternative risk factors that may have been previously underappreciated.
1. Introduction
The variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) repre-
sents a powerful generative model of data points that are assumed to possess some complex
yet unknown latent structure. This assumption is instantiated via the marginalized distri-
bution
pθ(x) = ∫ pθ(x|z) p(z) dz,    (1)
which forms the basis of prevailing VAE models. Here z ∈ R^κ is a collection of unobservable latent factors of variation that, when drawn from the prior p(z), are colloquially said to generate an observed data point x ∈ R^d through the conditional distribution pθ(x|z). The
latter is controlled by parameters θthat can, at least conceptually speaking, be optimized
by maximum likelihood over pθ(x) given available training examples.
|
10.1038.s41586-024-07177-7.pdf | Nature | Vol 627 | 28 March 2024
Article
Evolutionary trajectories of small cell lung cancer under therapy
Julie George1,2 ✉, Lukas Maas1, Nima Abedpour1,3,4, Maria Cartolano1,5, Laura Kaiser1,
Rieke N. Fischer6, Andreas H. Scheel7, Jan-Philipp Weber6, Martin Hellmich8, Graziella Bosco1,
Caroline Volz3,5, Christian Mueller1,2, Ilona Dahmen1, Felix John6, Cleidson Padua Alves1,
Lisa Werr1, Jens Peter Panse9,10, Martin Kirschner9,10, Walburga Engel-Riedel11,
Jessica Jürgens11, Erich Stoelben12, Michael Brockmann13, Stefan Grau14,15,
Martin Sebastian16,17,18, Jan A. Stratmann16,17, Jens Kern19, Horst-Dieter Hummel20,
Balazs Hegedüs21, Martin Schuler18,22, Till Plönes22,23, Clemens Aigner21,24, Thomas Elter3,
Karin Toepelt3, Yon-Dschun Ko25, Sylke Kurz26, Christian Grohé26, Monika Serke27,
Katja Höpker28, Lars Hagmeyer29, Fabian Doerr21,30, Khosro Hekmath30, Judith Strapatsas31,
Karl-Otto Kambartel32, Geothy Chakupurakal33, Annette Busch34, Franz-Georg Bauernfeind34,
Frank Griesinger35, Anne Luers35, Wiebke Dirks35, Rainer Wiewrodt36, Andrea Luecke36,
Ernst Rodermann37, Andreas Diel37, Volker Hagen38, Kai Severin39, Roland T . Ullrich3,5,
Hans Christian Reinhardt40,41, Alexander Quaas7, Magdalena Bogus42, Cornelius Courts42,
Peter Nürnberg43, Kerstin Becker43, Viktor Achter44, Reinhard Büttner7, Jürgen Wolf6,
Martin Peifer1,5 ✉ & Roman K. Thomas1,7,18 ✉
The evolutionary processes that underlie the marked sensitivity of small cell lung
cancer (SCLC) to chemotherapy and rapid relapse are unknown1–3. Here we determined
tumour phylogenies at diagnosis and throughout chemotherapy and immunotherapy
by multiregion sequencing of 160 tumours from 65 patients. Treatment-naive SCLC
exhibited clonal homogeneity at distinct tumour sites, whereas first-line platinum-
based chemotherapy led to a burst in genomic intratumour heterogeneity and spatial
clonal diversity. We observed branched evolution and a shift to ancestral clones
underlying tumour relapse. Effective radio- or immunotherapy induced a re-expansion
of founder clones with acquired genomic damage from first-line chemotherapy.
Whereas TP53 and RB1 alterations were exclusively part of the common ancestor,
MYC family amplifications were frequently not constituents of the founder clone.
At relapse, emerging subclonal mutations affected key genes associated with SCLC
biology, and tumours harbouring clonal CREBBP/ EP300 alterations underwent
genome duplications. Gene-damaging TP53 alterations and co-alterations of TP53
missense mutations with TP73, CREBBP/ EP300 or FMN2 were significantly associated
with shorter disease relapse following chemotherapy. In summary, we uncover key
processes of the genomic evolution of SCLC under therapy, identify the common
ancestor as the source of clonal diversity at relapse and show central genomic
patterns associated with sensitivity and resistance to chemotherapy.
Small cell lung cancer (SCLC) is one of the deadliest human cancers,
with a 5 year survival rate of less than 7%1–4. The standard of care for
extensive-stage SCLC consists of systemic treatment with platinum
and etoposide, recently combined with programmed death-ligand 1
(PD-L1) immune checkpoint inhibitors (ICIs)2. One peculiarity of SCLC is
its typically high sensitivity to platinum-based chemotherapy followed
by rapid recurrence, which distinguishes it from most other human
cancers. Unfortunately, second-line treatment with other chemothera -
peutics or immunotherapy is only marginally effective and patients
ultimately succumb to their disease1,2, 4.
We and others have previously performed large-scale genome
sequencing to comprehensively characterize cancer genome alterations in SCLC, which showed universal biallelic losses of the tumour sup -
pressors TP53 and RB1, additional alterations to histone-modifying
enzymes and cell cycle regulators, and MYC transcription factor ampli -
fications5–7. Furthermore, SCLC subgroups were defined on the basis of
the expression of neuroendocrine lineage transcription factors, which
impact tumour biology and treatment outcome4,8,9. Finally, preliminary
studies have provided initial clues in regard to molecular pathways
associated with resistance to chemotherapy10,11.
Despite progress in characterization of the molecular basis of SCLC,
the underlying patterns of clonal evolution and the mechanisms caus -
ing drug resistance have remained unclear. We suggest that cancer
genome alterations not only drive malignant transformation in SCLC https://doi.org/10.1038/s41586-024-07177-7
Received: 25 January 2023
Accepted: 7 February 2024
Published online: 13 March 2024
Open access
Check for updates
A list of affiliations appears at the end of the paper. |
2403.03507.pdf | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Jiawei Zhao1, Zhenyu Zhang3, Beidi Chen2,4, Zhangyang Wang3, Anima Anandkumar*1, Yuandong Tian*2
Abstract
Training Large Language Models (LLMs)
presents significant memory challenges, predom-
inantly due to the growing size of weights and
optimizer states. Common memory-reduction
approaches, such as low-rank adaptation
(LoRA), add a trainable low-rank matrix to
the frozen pre-trained weight in each layer,
reducing trainable parameters and optimizer
states. However, such approaches typically
underperform training with full-rank weights in
both pre-training and fine-tuning stages since
they limit the parameter search to a low-rank
subspace and alter the training dynamics, and
further, may require full-rank warm start. In this
work, we propose Gradient Low-Rank Projec-
tion ( GaLore ), a training strategy that allows
full-parameter learning but is more memory-
efficient than common low-rank adaptation
methods such as LoRA. Our approach reduces
memory usage by up to 65.5% in optimizer
states while maintaining both efficiency and
performance for pre-training on LLaMA 1B
and 7B architectures with C4 dataset with up to
19.7B tokens, and on fine-tuning RoBERTa on
GLUE tasks. Our 8-bit GaLore further reduces
optimizer memory by up to 82.5% and total
training memory by 63.3%, compared to a BF16
baseline. Notably, we demonstrate, for the first
time, the feasibility of pre-training a 7B model
on consumer GPUs with 24GB memory (e.g.,
NVIDIA RTX 4090) without model parallel,
checkpointing, or offloading strategies.
1. Introduction
Large Language Models (LLMs) have shown impressive
performance across multiple disciplines, including conver-
sational AI and language translation. However, pre-training
*Equal advising. 1California Institute of Technology, 2Meta AI, 3University of Texas at Austin, 4Carnegie Mellon University. Correspondence to: Jiawei Zhao <jiawei@caltech.edu>, Yuandong Tian <yuandong@meta.com>.
Preprint. Work in Progress
[Figure 1 (bar chart; y-axis: Memory Cost (GB); bars: BF16, Adafactor, 8-bit Adam, 8-bit GaLore; RTX 4090 capacity marked): Memory consumption of pre-training a LLaMA 7B model with a token batch size of 256 on a single device, without activation checkpointing and memory offloading. Details refer to Section 5.5.]
Algorithm 1: GaLore, PyTorch-like
for weight in model.parameters():
    grad = weight.grad
    # original space -> compact space
    lor_grad = project(grad)
    # update by Adam, Adafactor, etc.
    lor_update = update(lor_grad)
    # compact space -> original space
    update = project_back(lor_update)
    weight.data += update
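One plausible reading of the project / project_back steps above uses the leading left singular vectors of the gradient as a low-rank basis, as sketched below; in GaLore the projector is refreshed only periodically and the rank is a hyperparameter, so treat this as a simplified illustration rather than the exact implementation.

    import torch

    def make_projector(grad, rank):
        # grad: (m, n) gradient of one weight matrix.
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        return U[:, :rank]                     # (m, rank) orthonormal basis

    def project(grad, P):
        return P.T @ grad                      # original (m, n) -> compact (rank, n)

    def project_back(lor_update, P):
        return P @ lor_update                  # compact (rank, n) -> original (m, n)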
and fine-tuning LLMs require not only a huge amount of
computation but is also memory intensive. The memory
requirements include not only billions of trainable parame-
ters, but also their gradients and optimizer states (e.g., gra-
dient momentum and variance in Adam) that can be larger
than parameter storage themselves (Raffel et al., 2023;
Touvron et al., 2023; Chowdhery et al., 2022). For exam-
ple, pre-training a LLaMA 7B model from scratch with a
single batch size requires at least 58 GB memory (14GB for
trainable parameters, 42GB for Adam optimizer states and
weight gradients, and 2GB for activations1). This makes
the training not feasible on consumer-level GPUs such as
NVIDIA RTX 4090 with 24GB memory.
In addition to engineering and system efforts, such as gra-
dient checkpointing (Chen et al., 2016), memory offload-
ing (Rajbhandari et al., 2020), etc., to achieve faster and
more efficient distributed training, researchers also seek
to develop various optimization techniques to reduce the
memory usage during pre-training and fine-tuning.
1The calculation is based on LLaMA architecture, BF16 nu-
merical format, and maximum sequence length of 2048.
|
2404.09932.pdf | Foundational Challenges in Assuring Alignment and
Safety of Large Language Models
Usman Anwar1
Abulhair Saparov∗2, Javier Rando∗3, Daniel Paleka∗3, Miles Turpin∗2, Peter Hase∗4,
Ekdeep Singh Lubana∗5, Erik Jenner∗6, Stephen Casper∗7, Oliver Sourbut∗8,
Benjamin L. Edelman∗9, Zhaowei Zhang∗10, Mario Günther∗11, Anton Korinek∗12,
Jose Hernandez-Orallo∗13
Lewis Hammond8, Eric Bigelow9, Alexander Pan6, Lauro Langosco1, Tomasz Korbak14,
Heidi Zhang15, Ruiqi Zhong6, Seán Ó hÉigeartaigh‡1, Gabriel Recchia16, Giulio Corsi‡1,
Alan Chan‡17, Markus Anderljung‡17, Lilian Edwards‡18
Yoshua Bengio‡19, Danqi Chen‡20, Samuel Albanie‡1, Tegan Maharaj‡21, Jakob Foerster‡8,
Florian Tramer‡3, He He‡2, Atoosa Kasirzadeh‡22, Yejin Choi‡23
David Krueger‡1
∗indicates major contribution.
‡indicates advisory role.
1University of Cambridge2New York University3ETH Zurich4UNC Chapel Hill
5University of Michigian6University of California, Berkeley7Massachusetts Institute of Technology
8University of Oxford9Harvard University10Peking University11LMU Munich
12University of Virginia13Universitat Politècnica de València14University of Sussex
15Stanford University16Modulo Research17Center for the Governance of AI
18Newcastle University19Mila - Quebec AI Institute, Université de Montréal20Princeton University
21University of Toronto22University of Edinburgh23University of Washington, Allen Institute for AI
Abstract
This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three different categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges. Based on the identified challenges, we pose 200+ concrete research questions.
Corresponding author: Usman Anwar « usmananwar391@gmail.com »
|
2022.naacl-main.134.pdf | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies , pages 1836 - 1853
July 10-15, 2022 ©2022 Association for Computational Linguistics
Intent Detection and Discovery from User Logs via Deep Semi-Supervised
Contrastive Clustering
Rajat Kumar, Mayur Patidar, Vaibhav Varshney, Lovekesh Vig, Gautam Shroff
TCS Research, New Delhi, India
{k.rajat2, patidar.mayur, varshney.v,
lovekesh.vig, gautam.shroff}@tcs.com
Abstract
Intent Detection is a crucial component of Dia-
logue Systems wherein the objective is to clas-
sify a user utterance into one of the multiple
pre-defined intents. A pre-requisite for devel-
oping an effective intent identifier is a training
dataset labeled with all possible user intents.
However, even skilled domain experts are often
unable to foresee all possible user intents at de-
sign time and for practical applications, novel
intents may have to be inferred incrementally
on-the-fly from user utterances. Therefore, for
any real-world dialogue system, the number
of intents increases over time and new intents
have to be discovered by analyzing the utter-
ances outside the existing set of intents. In this
paper, our objective is to i) detect known intent
utterances from a large number of unlabeled
utterance samples given a few labeled samples
and ii) discover new unknown intents from the
remaining unlabeled samples. Existing SOTA
approaches address this problem via alternate
representation learning and clustering wherein
pseudo labels are used for updating the repre-
sentations and clustering is used for generating
the pseudo labels. Unlike existing approaches
that rely on epoch-wise cluster alignment, we
propose an end-to-end deep contrastive clus-
tering algorithm that jointly updates model pa-
rameters and cluster centers via supervised and
self-supervised learning and optimally utilizes
both labeled and unlabeled data. Our proposed
approach outperforms competitive baselines on
five public datasets for both settings: (i) where
the number of undiscovered intents is known in
advance, and (ii) where the number of intents is
estimated by an algorithm. We also propose a
human-in-the-loop variant of our approach for
practical deployment which does not require
an estimate of new intents and outperforms the
end-to-end approach.
1 Introduction
Modern dialogue systems (Louvan and Magnini,
2020) are increasingly reliant on intent detection
[Figure 1 shows example user logs: utterances such as "How do I link a new card in the app", "The ATM didn't return my card!", and "My card was stolen", each with the model's predicted intent (I1: Activate My Card or I2: Card Linking), a confidence score, and user feedback; utterances flagged for human review yield two newly discovered intents, I3: Card Lost or Stolen and I4: Card Swallowed, plus one out-of-scope utterance.]
Figure 1: An instance of user logs, where the intent detection model is trained on two known intents, i.e., i1 and i2. After manual analysis of user logs a human reviewer has discovered two new intents i3 and i4 and assigned utterance u21 to an existing intent, i.e., i2.
to classify a user utterance into one of the multiple
known user intents. Intent detection is typically
modeled as a multi-class classification problem
where labeled data comprising of utterances for
each known intent is manually created by domain
experts. However, most real-world applications
have to cope with evolving user needs and new
functionality is routinely introduced into the dia-
logue system resulting in a continuously increasing
number of intents over time. Even for seasoned do-
main experts estimating future user requirements at
design time is challenging and these often have to
be discovered from recent user logs which contain
information corresponding to past user utterances,
model response (i.e., predicted intent), implicit
(confidence or softmax probability), and explicit
(user clicks on a thumbs up or thumbs down icon)
feedback as shown in Fig 1. The intent detection
model presented in Fig. 1 is trained on two ini-
tial intents ( I1,I2) from the banking domain using
labeled data created by domain experts. Filtered
user logs containing implicit and explicit feedback
were shared with domain experts, who, discovered
two new intents ( I3,I4) and mapped the filtered
utterances to these new intents. Additionally, ex-
perts also have to identify and discard utterances
that are outside the domain of the dialog system. |
8781_turing_complete_transformers_t.pdf | Under review as a conference paper at ICLR 2023
TURING COMPLETE TRANSFORMERS: TWO TRANSFORMERS ARE MORE POWERFUL THAN ONE
Anonymous authors
Paper under double-blind review
ABSTRACT
This paper presents Find+Replace transformers, a family of multi-transformer
architectures that can provably do things no single transformer can, and which
outperform GPT-4 on several challenging tasks. We first establish that tra-
ditional transformers and similar architectures are not Turing complete, while
Find+Replace transformers are. Using this fact, we show how arbitrary programs
can be compiled to Find+Replace transformers, aiding interpretability research.
We also demonstrate the superior performance of Find+Replace transformers over
GPT-4 on a set of composition challenge problems (solving problems 20x longer
on tower of Hanoi, 3%→100% on multiplication, 72%→100% on a dynamic
programming problem). This work aims to provide a theoretical basis for multi-
agent architectures, and to encourage their exploration.
1 I NTRODUCTION
The first computers – including the difference engine , differential analyzer, Z1, and ABC (Bab-
bage & Babbage, 1825; Bush, 1931; Rojas, 2014; Atanasoff, 1940) – were not Turing Complete.
Some such machines, like the Hollerith Tabulating Machine (Hollerith, 1889) and the Harvard Mark
I (Comrie, 1946), even achieved considerable real-world use despite that limitation. However, the
advent of Turing Complete computers (Turing et al., 1936; Goldstine & Goldstine, 1946; Kilburn,
1949) fundamentally changed how computers were used and led to the development of more com-
plex, comprehensible, and composable programs (Backus, 1954; Copeland, 2004).
As we will show in this paper, current LLMs based on the transformer architecture (Vaswani et al.,
2017) are not Turing Complete. We present an alternative that is.
The fundamental reason transformers are not Turing complete is that, once the architecture of a
transformer is decided, there is a bounded amount of computation that it can do. This guarantees the
model will fail to generalize beyond input of some length and complexity. Such limitations are not
only theoretical, they are supported by a number of recent results on the ability of language models
to generalize to large context lengths (Del’etang et al., 2022; Liu et al., 2023; Dziri et al., 2023).
Addressing these deficiencies is nontrivial and requires a fundamental shift in approach. We propose
an approach drawing from multi-agent systems (Messing, 2003; Stone & Veloso, 2000), particularly
multi-transformer systems. Such systems have recently garnered interest, being employed to gener-
ate simulacra of human behavior (Park et al., 2023), perform economic simulations (Horton, 2023),
and demonstrate open-ended exploration in games like Minecraft (Wang et al., 2023a).
This paper presents a family of multi-transformer architectures, and provides theoretical and em-
pirical evidence the family can outperform traditional transformers. We hope this study will ignite
further investigations into architectures that are multi-transformer and Turing complete.
Our contributions are as follows:
• We provide a simple proof that current LLMs are not Turing Complete
• We present Find+Replace transformers, a family of provably Turing Complete architectures
• We introduce a method for turning any program into a Find+Replace transformer
• We show that Find+Replace transformers out-perform GPT-4 on a set of challenge tasks
|
2023.01.11.523679v3.full.pdf | The Nucleotide Transformer: Building and Evaluating Robust
Foundation Models for Human Genomics
Hugo Dalla-Torre1, Liam Gonzalez1, Javier Mendoza-Revilla1, Nicolas Lopez Carranza1,
Adam Henryk Grzywaczewski2, Francesco Oteri1, Christian Dallago2 3,
Evan Trop1, Bernardo P. de Almeida1, Hassan Sirelkhatim2,
Guillaume Richard1, Marcin Skwark1, Karim Beguir1,
Marie Lopez∗†1, Thomas Pierrot∗†1
1InstaDeep2Nvidia3TUM
Abstract
Closing the gap between measurable genetic information and observable traits is a longstand-
ing challenge in genomics. Yet, the prediction of molecular phenotypes from DNA sequences
alone remains limited and inaccurate, often driven by the scarcity of annotated data and the
inability to transfer learnings between prediction tasks. Here, we present an extensive study
of foundation models pre-trained on DNA sequences, named the Nucleotide Transformer, rang-
ing from 50M up to 2.5B parameters and integrating information from 3,202 diverse human
genomes, as well as 850 genomes selected across diverse phyla, including both model and non-
model organisms. These transformer models yield transferable, context-specific representations
of nucleotide sequences, which allow for accurate molecular phenotype prediction even in low-
data settings. We show that the developed models can be fine-tuned at low cost, even in low-data
regimes, to solve a variety of genomics applications. Despite no supervision,
the transformer models learned to focus attention on key genomic elements, including those that
regulate gene expression, such as enhancers. Lastly, we demonstrate that utilizing model rep-
resentations can improve the prioritization of functional genetic variants. The training and ap-
plication of foundational models in genomics explored in this study provide a widely applicable
stepping stone toward accurate molecular phenotype prediction from DNA sequence.
Code and weights available at: https://github.com/instadeepai/nucleotide-transformer in Jax and
https://huggingface.co/InstaDeepAI in Pytorch. Example notebooks to apply these models to any
downstream task are available on HuggingFace.
Introduction
Foundation models in artificial intelligence (AI) are characterized by their large-scale nature, incorpo-
rating millions of parameters trained on extensive datasets. These models can be adapted for a wide
range of subsequent predictive tasks and have profoundly transformed the AI field. Notable examples
in natural language processing (NLP) include the so-called language models (LMs) BERT [ 1] and GPT
[2]. LMs have gained significant popularity in recent years owing to their ability to be trained on un-
labeled data, creating general-purpose representations capable of solving downstream tasks. One way
they achieve a comprehensive understanding of language is by solving billions of cloze tests, in which
they predict the correct word to fill in the blank in a given sentence. This approach is known as masked
language modeling [1]. Early instances of foundation models applying this objective to biology involved
training LMs on protein sequences, where they were tasked with predicting masked amino acids in
large protein sequence datasets [3, 4,5]. These protein LMs, when applied to downstream tasks using
transfer learning, demonstrated the ability to compete with and even outperform previous methods
for tasks such as predicting protein structure [3, 4] and function [6, 7], even in data-scarce regimes [8].
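As a concrete illustration of the masked-language-modeling objective mentioned above, the sketch below corrupts a token sequence and records the targets the model must recover. The 15% masking rate, the [MASK] symbol, and the k-mer-style DNA tokens are illustrative placeholders rather than the paper's exact tokenizer or settings.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Randomly mask a fraction of tokens for a masked-language-modeling objective.
    Returns the corrupted sequence and the positions/labels the model must recover."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok          # target the model must predict at position i
            corrupted[i] = mask_token
    return corrupted, labels

# k-mer-style DNA tokens (illustrative; the real tokenizer and ratio are assumptions)
tokens = ["ATGCGT", "ACGTAA", "TTGACC", "GGCATT", "CCATGA", "TACGGT"]
print(mask_tokens(tokens))
```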
∗Equal Supervision
†Corresponding authors: t.pierrot@instadeep.com & m.lopez@instadeep.com
|
bengio03a.pdf | Journal of Machine Learning Research 3 (2003) 1137–1155 Submitted 4/02; Published 2/03
A Neural Probabilistic Language Model
Yoshua Bengio BENGIOY@IRO.UMONTREAL.CA
Réjean Ducharme DUCHARME@IRO.UMONTREAL.CA
Pascal Vincent VINCENTP@IRO.UMONTREAL.CA
Christian Jauvin JAUVINC@IRO.UMONTREAL.CA
Département d'Informatique et Recherche Opérationnelle
Centre de Recherche Mathématiques, Université de Montréal, Montréal, Québec, Canada
Editors: Jaz Kandola, Thomas Hofmann, Tomaso Poggio and John Shawe-Taylor
Abstract
A goal of statistical language modeling is to learn the joint probability function of sequences of
words in a language. This is intrinsically difficult because of the curse of dimensionality: a word
sequence on which the model will be tested is likely to be different from all the word sequences seen
during training. Traditional but very successful approaches based on n-grams obtain generalization
by concatenating very short overlapping sequences seen in the training set. We propose to fight the
curse of dimensionality by learning a distributed representation for words which allows each
training sentence to inform the model about an exponential number of semantically neighboring
sentences. The model learns simultaneously (1) a distributed representation for each word along
with (2) the probability function for word sequences, expressed in terms of these representations.
Generalization is obtained because a sequence of words that has never been seen before gets high
probability if it is made of words that are similar (in the sense of having a nearby representation) to
words forming an already seen sentence. Training such large models (with millions of parameters)
within a reasonable time is itself a significant challenge. We report on experiments using neural
networks for the probability function, showing on two text corpora that the proposed approach
significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to
take advantage of longer contexts.
Keywords: Statistical language modeling, artificial neural networks, distributed representation,
curse of dimensionality
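A minimal modern sketch of the architecture the abstract describes: each context word is mapped to a learned feature vector, and the concatenated vectors are fed through a nonlinearity into a softmax over the next word. Sizes are placeholders and the original model's direct input-to-output connections are omitted; this is not the paper's implementation.

```python
import torch
import torch.nn as nn

class NGramNeuralLM(nn.Module):
    """Each of the n-1 context words gets a learned embedding; the concatenation
    feeds a tanh hidden layer and a softmax over the vocabulary (logits here)."""
    def __init__(self, vocab_size, context=3, emb_dim=30, hidden=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear(context * emb_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, context_ids):                   # (batch, context)
        x = self.emb(context_ids).flatten(1)          # (batch, context * emb_dim)
        return self.out(torch.tanh(self.hidden(x)))   # logits over the next word

model = NGramNeuralLM(vocab_size=1000)
logits = model(torch.randint(0, 1000, (4, 3)))        # 4 examples, 3-word contexts
print(logits.shape)                                   # torch.Size([4, 1000])
```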
1. Introduction
A fundamental problem that makes language modeling and other learning problems difficult is the
curse of dimensionality . It is particularly obvious in the case when one wants to model the joint
distribution between many discrete random variables (such as words in a sentence, or discrete attributes in a data-mining task). For example, if one wants to model the joint distribution of 10 consecutive words in a natural language with a vocabulary V of size 100,000, there are potentially 100,000^10 − 1 = 10^50 − 1 free parameters. When modeling continuous variables, we obtain gen-
eralization more easily (e.g. with smooth classes of functions like multi-layer neural networks or
Gaussian mixture models) because the function to be learned can be expected to have some local smoothness properties. For discrete spaces, the generalization structure is not as obvious: any change of these discrete variables may have a drastic impact on the value of the function to be esti-
|
2004.05150.pdf | Longformer: The Long-Document Transformer
Iz Beltagy∗Matthew E. Peters∗Arman Cohan∗
Allen Institute for Artificial Intelligence, Seattle, WA, USA
{beltagy,matthewp,armanc }@allenai.org
Abstract
Transformer-based models are unable to pro-
cess long sequences due to their self-attention
operation, which scales quadratically with the
sequence length. To address this limitation,
we introduce the Longformer with an attention
mechanism that scales linearly with sequence
length, making it easy to process documents of
thousands of tokens or longer. Longformer’s
attention mechanism is a drop-in replacement
for the standard self-attention and combines
a local windowed attention with a task moti-
vated global attention. Following prior work
on long-sequence transformers, we evaluate
Longformer on character-level language mod-
eling and achieve state-of-the-art results on
text8 and enwik8. In contrast to most
prior work, we also pretrain Longformer and
finetune it on a variety of downstream tasks.
Our pretrained Longformer consistently out-
performs RoBERTa on long document tasks
and sets new state-of-the-art results on Wiki-
Hop and TriviaQA. We finally introduce the
Longformer-Encoder-Decoder (LED), a Long-
former variant for supporting long document
generative sequence-to-sequence tasks, and
demonstrate its effectiveness on the arXiv sum-
marization dataset.1
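The combined local-plus-global attention pattern described above can be pictured as a boolean mask over token pairs; the sketch below builds such a mask with an illustrative window size and a single global position. A practical implementation computes only the allowed entries rather than materializing a dense mask.

```python
import torch

def windowed_plus_global_mask(seq_len, window=2, global_positions=(0,)):
    """Boolean attention mask (True = attention allowed): each token attends to a
    local window of neighbours, and designated global tokens attend to, and are
    attended by, every position. Window size and global positions are illustrative."""
    idx = torch.arange(seq_len)
    mask = (idx[:, None] - idx[None, :]).abs() <= window   # sliding local window
    for g in global_positions:
        mask[g, :] = True    # the global token attends everywhere
        mask[:, g] = True    # every token attends to the global token
    return mask

print(windowed_plus_global_mask(6, window=1, global_positions=(0,)).int())
```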
1 Introduction
Transformers (Vaswani et al., 2017) have achieved
state-of-the-art results in a wide range of natu-
ral language tasks including generative language
modeling (Dai et al., 2019; Radford et al., 2019)
and discriminative language understanding (De-
vlin et al., 2019). This success is partly due to
the self-attention component which enables the net-
work to capture contextual information from the
entire sequence. While powerful, the memory and
computational requirements of self-attention grow
∗Equal contribution.
1https://github.com/allenai/longformer
Figure 1: Runtime and memory of full self-
attention and different implementations of Long-
former’s self-attention; Longformer-loop is non-
vectorized, Longformer-chunk is vectorized, and
Longformer-cuda is a custom cuda kernel im-
plementations. Longformer’s memory usage scales
linearly with the sequence length, unlike the full
self-attention mechanism that runs out of memory
for long sequences on current GPUs. Different
implementations vary in speed, with the vectorized
Longformer-chunk being the fastest. More details
are in section 3.2.
quadratically with sequence length, making it infea-
sible (or very expensive) to process long sequences.
To address this limitation, we present Long-
former, a modified Transformer architecture with
a self-attention operation that scales linearly with
the sequence length, making it versatile for pro-
cessing long documents (Fig 1). This is an advan-
tage for natural language tasks such as long docu-
ment classification, question answering (QA), and
coreference resolution, where existing approaches
partition or shorten the long context into smaller
sequences that fall within the typical 512 token
limit of BERT-style pretrained models. Such parti-
tioning could potentially result in loss of important
cross-partition information, and to mitigate this
problem, existing methods often rely on complex
architectures to address such interactions. On the
other hand, our proposed Longformer is able to
build contextual representations of the entire con-
text using multiple layers of attention, reducing the |
2309.17453v4.pdf | Published as a conference paper at ICLR 2024
EFFICIENT STREAMING LANGUAGE MODELS
WITH ATTENTION SINKS
Guangxuan Xiao1∗Yuandong Tian2Beidi Chen3Song Han1,4Mike Lewis2
1Massachusetts Institute of Technology2Meta AI
3Carnegie Mellon University4NVIDIA
https://github.com/mit-han-lab/streaming-llm
ABSTRACT
Deploying Large Language Models (LLMs) in streaming applications such as
multi-round dialogue, where long interactions are expected, is urgently needed but
poses two major challenges. Firstly, during the decoding stage, caching previous
tokens’ Key and Value states (KV) consumes extensive memory. Secondly, popular
LLMs cannot generalize to longer texts than the training sequence length. Window
attention, where only the most recent KVs are cached, is a natural approach — but
we show that it fails when the text length surpasses the cache size. We observe
an interesting phenomenon, namely attention sink , that keeping the KV of initial
tokens will largely recover the performance of window attention. In this paper, we
first demonstrate that the emergence of attention sink is due to the strong attention
scores towards initial tokens as a “sink” even if they are not semantically important.
Based on the above analysis, we introduce StreamingLLM, an efficient framework
that enables LLMs trained with a finite length attention window to generalize to
infinite sequence length without any fine-tuning. We show that StreamingLLM can
enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language
modeling with up to 4 million tokens and more. In addition, we discover that
adding a placeholder token as a dedicated attention sink during pre-training can
further improve streaming deployment. In streaming settings, StreamingLLM
outperforms the sliding window recomputation baseline by up to 22.2 ×speedup.
Code and datasets are provided in the link.
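A minimal sketch of the cache policy sketched in the abstract: keep the key/value entries of the first few "attention sink" tokens plus a rolling window of the most recent tokens, and evict everything in between. The specific counts below are placeholders, not the paper's settings.

```python
def streaming_kv_indices(cache_len, num_sinks=4, window=8):
    """Which cached positions to keep under an attention-sink style policy:
    the first num_sinks tokens plus the most recent `window` tokens.
    (num_sinks/window values here are illustrative.)"""
    if cache_len <= num_sinks + window:
        return list(range(cache_len))          # nothing to evict yet
    sinks = list(range(num_sinks))             # initial "sink" tokens
    recent = list(range(cache_len - window, cache_len))  # rolling recent window
    return sinks + recent

print(streaming_kv_indices(20))   # [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```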
1 I NTRODUCTION
Large Language Models (LLMs) (Radford et al., 2018; Brown et al., 2020; Zhang et al., 2022;
OpenAI, 2023; Touvron et al., 2023a;b) are becoming ubiquitous, powering many natural language
processing applications such as dialog systems (Schulman et al., 2022; Taori et al., 2023; Chiang et al.,
2023), document summarization (Goyal & Durrett, 2020; Zhang et al., 2023a), code completion (Chen
et al., 2021; Rozière et al., 2023) and question answering (Kamalloo et al., 2023). To unleash the
full potential of pretrained LLMs, they should be able to efficiently and accurately perform long
sequence generation. For example, an ideal ChatBot assistant can stably work over the content of
recent day-long conversations. However, it is very challenging for LLM to generalize to longer
sequence lengths than they have been pretrained on, e.g., 4K for Llama-2 Touvron et al. (2023b).
The reason is that LLMs are constrained by the attention window during pre-training. Despite
substantial efforts to expand this window size (Chen et al., 2023; kaiokendev, 2023; Peng et al., 2023)
and improve training (Dao et al., 2022; Dao, 2023) and inference (Pope et al., 2022; Xiao et al., 2023;
Anagnostidis et al., 2023; Wang et al., 2021; Zhang et al., 2023b) efficiency for lengthy inputs, the
acceptable sequence length remains intrinsically finite , which doesn’t allow persistent deployments.
In this paper, we first introduce the concept of LLM streaming applications and ask the question:
Can we deploy an LLM for infinite-length inputs without sacrificing efficiency and performance?
∗Part of the work done during an internship at Meta AI.
|
10.1101.2022.11.18.517004.pdf | High-resolution image reconstruction with latent diffusion models from human
brain activity
Yu Takagi1,2* Shinji Nishimoto1,2
1Graduate School of Frontier Biosciences, Osaka University, Japan
2CiNet, NICT, Japan
{takagi.yuu.fbs,nishimoto.shinji.fbs} @osaka-u.ac.jp
Figure 1. Presented images (red box, top row) and images reconstructed from fMRI signals (gray box, bottom row) for one subject (subj01).
Abstract
Reconstructing visual experiences from human brain ac-
tivity offers a unique way to understand how the brain rep-
resents the world, and to interpret the connection between
computer vision models and our visual system. While deep
generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a
new method based on a diffusion model (DM) to recon-
struct images from human brain activity obtained via func-
tional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs, while preserving their high generative perfor-
mance. We also characterize the inner mechanisms of the
LDM by studying how its different components (such as the latent vector of image Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can recon-
struct high-resolution images with high fidelity in straight-
forward fashion, without the need for any additional train-
ing and fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM
components from a neuroscientific perspective. Overall, our
study proposes a promising method for reconstructing images from human brain activity, and provides a new frame-
work for understanding DMs. Please check out our web-
page at https://sites.google.com/view/stablediffusion-with-
brain/.
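As a rough sketch of the decoding setup described above, the snippet below fits a simple linear (ridge) map from fMRI feature vectors to an image-latent target, whose predictions would then be handed to the diffusion model. Shapes, regularization strength, and the random data are stand-ins; the study itself maps measured voxel responses to Stable Diffusion's latent and conditioning representations.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical shapes: X are per-trial fMRI feature vectors, Z are flattened latent targets.
rng = np.random.default_rng(0)
X_train, X_test = rng.standard_normal((800, 5000)), rng.standard_normal((50, 5000))
Z_train = rng.standard_normal((800, 64))             # stand-in for an image latent z

reg = Ridge(alpha=100.0).fit(X_train, Z_train)        # linear map: brain features -> latent
Z_pred = reg.predict(X_test)                          # predicted latents for the decoder
print(Z_pred.shape)                                   # (50, 64)
```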
1. Introduction
A fundamental goal of computer vision is to construct
artificial systems that see and recognize the world as hu-man visual systems do. Recent developments in the mea-
surement of population brain activity, combined with ad-
vances in the implementation and design of deep neu-ral network models, have allowed direct comparisons be-
tween latent representations in biological brains and ar-
chitectural characteristics of artificial networks, providing
important insights into how these systems operate [3, 8–
10,13,18,19,21,42,43,54,55]. These efforts have in-
|
2309.05444.pdf | Pushing Mixture of Experts to the Limit:
Extremely Parameter Efficient MoE for
Instruction Tuning
Ted Zadouri (Cohere for AI) ted@cohere.com
Ahmet Üstün (Cohere for AI) ahmet@cohere.com
Arash Ahmadian† (Cohere for AI) arash@cohere.com
Beyza Ermiş (Cohere For AI) beyza@cohere.com
Acyr Locatelli (Cohere) acyr@cohere.com
Sara Hooker (Cohere for AI) sarahooker@cohere.com
Abstract
The Mixture of Experts (MoE) is a widely known neural architecture where an ensemble of specialized
sub-models optimizes overall performance with a constant computational cost. However, conventional
MoEs pose challenges at scale due to the need to store all experts in memory. In this paper, we
push MoE to the limit. We propose extremely parameter-efficient MoE by uniquely combining MoE
architecture with lightweight experts. Our MoE architecture outperforms standard parameter-efficient
fine-tuning (PEFT) methods and is on par with full fine-tuning by only updating the lightweight
experts – less than 1% of an 11B parameters model. Furthermore, our method generalizes to unseen
tasks as it does not depend on any prior task knowledge. Our research underscores the versatility
of the mixture of experts architecture, showcasing its ability to deliver robust performance even
when subjected to rigorous parameter constraints. Our code used in all the experiments is publicly
available here: https://github.com/for-ai/parameter-efficient-moe .
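To make the "lightweight experts" idea concrete, here is a sketch in which each expert is just a learned rescaling vector applied to the output of a frozen linear layer, mixed by a soft router; only the router and the expert vectors would be trained. This is an illustrative re-sketch under those assumptions, not the authors' exact architecture or released code.

```python
import torch
import torch.nn as nn

class MixtureOfScalingExperts(nn.Module):
    """A frozen pretrained linear layer whose output is rescaled by a soft mixture of
    small learnable vectors ("experts"); only router + experts receive gradients."""
    def __init__(self, base: nn.Linear, num_experts=4):
        super().__init__()
        self.base = base.requires_grad_(False)                     # freeze base weights
        self.experts = nn.Parameter(torch.ones(num_experts, base.out_features))
        self.router = nn.Linear(base.in_features, num_experts)

    def forward(self, x):
        h = self.base(x)                                           # frozen computation
        gate = torch.softmax(self.router(x), dim=-1)               # (batch, num_experts)
        scale = gate @ self.experts                                # soft-merged expert vector
        return h * scale

layer = MixtureOfScalingExperts(nn.Linear(16, 32))
print(layer(torch.randn(2, 16)).shape)   # torch.Size([2, 32])
```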
1 Introduction
A conventional training paradigm is to apply the weights of a model to each input. Arguably, this is
not efficient since a given input may not need all of a model’s capacity. In contrast, MoEs build
on the premise that sub-modular components – so called experts – can specialize to different types
of inputs. This emphasis on conditional computation has important efficiency side-effects such as
constant inference cost. This has made MoEs an area of significant research and widespread adoption
in the era of large-scale Transformers where scaling has increased deployment and latency costs
(Shazeer et al., 2018; Riquelme et al., 2021; Du et al., 2022; Fedus et al., 2022).
While the majority of work to-date has focused on MoEs as a pretraining strategy, the inherent
motivation of MoEs is not confined solely to pretraining. In fact, the merits of MoEs are arguably
well suited to an instruction fine-tuning setting where the data is often deliberately structured to
†Also affiliated with the University of Toronto & the Vector Institute for Artificial Intelligence.
Released as a preprint on September 12, 2023. |
1506.05254.pdf | Gradient Estimation Using
Stochastic Computation Graphs
John Schulman1,2
joschu@eecs.berkeley.eduNicolas Heess1
heess@google.com
Theophane Weber1
theophane@google.comPieter Abbeel2
pabbeel@eecs.berkeley.edu
1Google DeepMind2University of California, Berkeley, EECS Department
Abstract
In a variety of problems originating in supervised, unsupervised, and reinforce-
ment learning, the loss function is defined by an expectation over a collection
of random variables, which might be part of a probabilistic model or the exter-
nal world. Estimating the gradient of this loss function, using samples, lies at
the core of gradient-based learning algorithms for these problems. We introduce
the formalism of stochastic computation graphs —directed acyclic graphs that in-
clude both deterministic functions and conditional probability distributions—and
describe how to easily and automatically derive an unbiased estimator of the loss
function’s gradient. The resulting algorithm for computing the gradient estimator
is a simple modification of the standard backpropagation algorithm. The generic
scheme we propose unifies estimators derived in variety of prior work, along with
variance-reduction techniques therein. It could assist researchers in developing in-
tricate models involving a combination of stochastic and deterministic operations,
enabling, for example, attention, memory, and control actions.
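For intuition, the snippet below applies the classic score-function (REINFORCE) estimator, one of the Monte-Carlo gradient estimators this formalism unifies, to a toy expected loss over a Bernoulli variable; the distribution and cost function are arbitrary illustrations.

```python
import torch

# Toy problem: L(theta) = E_{x ~ Bernoulli(sigmoid(theta))}[ f(x) ], with f treated as a
# black box in x. Score-function estimator: grad ≈ mean( f(x) * d/dtheta log p(x; theta) ).
theta = torch.tensor(0.0, requires_grad=True)
f = lambda x: (x - 0.7) ** 2                # arbitrary per-sample cost

dist = torch.distributions.Bernoulli(torch.sigmoid(theta))
x = dist.sample((10000,))                   # no gradient flows through sampling
surrogate = (f(x).detach() * dist.log_prob(x)).mean()
surrogate.backward()                        # theta.grad now holds the estimated gradient
print(theta.grad)                           # ≈ -0.1, the analytic value for this toy problem
```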
1 Introduction
The great success of neural networks is due in part to the simplicity of the backpropagation al-
gorithm, which allows one to efficiently compute the gradient of any loss function defined as a
composition of differentiable functions. This simplicity has allowed researchers to search in the
space of architectures for those that are both highly expressive and conducive to optimization; yield-
ing, for example, convolutional neural networks in vision [12] and LSTMs for sequence data [9].
However, the backpropagation algorithm is only sufficient when the loss function is a deterministic,
differentiable function of the parameter vector.
A rich class of problems arising throughout machine learning requires optimizing loss functions
that involve an expectation over random variables. Two broad categories of these problems are (1)
likelihood maximization in probabilistic models with latent variables [17, 18], and (2) policy gradi-
ents in reinforcement learning [5, 23, 26]. Combining ideas from those two perennial topics,
recent models of attention [15] and memory [29] have used networks that involve a combination of
stochastic and deterministic operations.
In most of these problems, from probabilistic modeling to reinforcement learning, the loss functions
and their gradients are intractable, as they involve either a sum over an exponential number of latent
variable configurations, or high-dimensional integrals that have no analytic solution. Prior work (see
Section 6) has provided problem-specific derivations of Monte-Carlo gradient estimators, however,
to our knowledge, no previous work addresses the general case.
Appendix C recalls several classic and recent techniques in variational inference [14, 10, 21] and re-
inforcement learning [23, 25, 15], where the loss functions can be straightforwardly described using
|
10.1038.s41586-023-06832-9.pdf |
Article
Predicting multiple conformations via
sequence clustering and AlphaFold2
Hannah K. Wayment-Steele1,7, Adedolapo Ojoawo1,7, Renee Otten1,5, Julia M. Apitz1,
Warintra Pitsawong1,6, Marc Hömberger1,5, Sergey Ovchinnikov2, Lucy Colwell3,4 &
Dorothee Kern1 ✉
AlphaFold2 (ref. 1) has revolutionized structural biology by accurately predicting
single structures of proteins. However, a protein’s biological function often depends
on multiple conformational substates2, and disease-causing point mutations often
cause population changes within these substates3,4. We demonstrate that clustering a
multiple-sequence alignment by sequence similarity enables AlphaFold2 to sample alternative states of known metamorphic proteins with high confidence. Using
this method, named AF-Cluster, we investigated the evolutionary distribution of
predicted structures for the metamorphic protein KaiB
(ref. 5) and found that predictions
of both conformations were distributed in clusters across the KaiB family. We used
nuclear magnetic resonance spectroscopy to confirm an AF-Cluster prediction: a
cyanobacteria KaiB variant is stabilized in the opposite state compared with the
more widely studied variant. To test AF-Cluster’s sensitivity to point mutations, we designed and experimentally verified a set of three mutations predicted to flip KaiB
from Rhodobacter sphaeroides from the ground to the fold-switched state. Finally,
screening for alternative states in protein families without known fold switching identified a putative alternative state for the oxidoreductase Mpt53 in Mycobacterium
tuberculosis. Further development of such bioinformatic methods in tandem with experiments will probably have a considerable impact on predicting protein energy
landscapes, essential for illuminating biological function.
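A toy sketch of the sequence-similarity clustering step described above: aligned sequences are one-hot encoded and grouped with a density-based clustering algorithm, so that each cluster's sub-alignment could be fed to the structure predictor separately. The clustering algorithm, its parameters, and the five-sequence alignment are illustrative placeholders, not the exact AF-Cluster procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

AA = "ACDEFGHIKLMNPQRSTVWY-"          # 20 amino acids plus the gap symbol

def one_hot(seq):
    """Flatten an aligned sequence into a one-hot vector."""
    return np.eye(len(AA))[[AA.index(c) for c in seq]].ravel()

# Toy MSA rows of equal length; real alignments are far longer and deeper.
msa = ["ACDE-", "ACDE-", "ACDF-", "GHKL-", "GHKM-"]
X = np.stack([one_hot(s) for s in msa])

# Group similar sequences; eps/min_samples are placeholders, not tuned values.
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
print(labels)    # two clusters for this toy alignment
```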
Understanding the mechanistic basis of any protein’s functions
requires understanding the complete set of conformational substates
that it can adopt2. For any protein-structure prediction method, the
task of predicting ensembles can be considered in two parts: an ideal
method would (1) generate conformations encompassing the com -
plete landscape and (2) score these conformations in accordance with
the underlying Boltzmann distribution. AlphaFold2 (AF2) achieved
breakthrough performance in the CASP14 competition6 in part by
advancing the state of the art for inferring patterns of interactions
between related sequences in a multiple-sequence alignment (MSA),
building on a long history of methods for inferring these patterns7–10,
often called evolutionary couplings. The premise of methods to infer
structure based on evolutionary couplings is that, because amino acids
exist and evolve in the context of 3D structure, they are not free to
evolve independently, but instead co-evolve in patterns reflective of the
underlying structure. However, proteins must evolve in the context of
the multiple conformational states that they adopt. The high accuracy
of AF2 (ref. 1 ) at single-structure prediction has garnered interest in
its ability to predict multiple conformations of proteins, yet AF2 has
been demonstrated to fail in predicting multiple structures of meta -
morphic proteins11, proteins with apo/holo conformational changes12
and other multi-state proteins13 using its default settings. Despite these demonstrations of shortcomings, it was shown that subsampling the
input MSA enables AF2 to predict known conformational changes of
transporters14.
Success of the MSA subsampling approach in a given system implies
that when calculating evolutionary couplings with a complete MSA,
evolutionary couplings for multiple states are already sufficiently
present such that when introducing noise to obscure subsets of these
contacts, there are still sufficiently complete sets of contacts cor -
responding to one or the other state. Indeed, methods for inferring
evolutionary couplings have already demonstrated that contacts corre -
sponding to multiple states can be observed at the level of entire MSAs
for membrane proteins15, ligand-induced conformational changes16
and multimerization-induced conformational changes17. Methods
proposed to deconvolve sets of states when previous knowledge about
one or more states is known include ablating residues corresponding to
contacts of a known dominant state18 and supplementing the original
MSA with proteins that are known to occupy a rarer state19. However,
there is a need for methods that deconvolve signal from multiple states
if they are not already both present at the level of the entire MSA. For
example, simply subdividing a MSA and making predictions for por -
tions of the MSA has also been used to detect variations in evolutionary
couplings within a protein family17,20.
https://doi.org/10.1038/s41586-023-06832-9
Received: 7 July 2023
Accepted: 3 November 2023
Published online: 13 November 2023
Open access
1Department of Biochemistry, Brandeis University and Howard Hughes Medical Institute, Waltham, MA, USA. 2Center for Systems Biology, Harvard University, Cambridge, MA, USA. 3Google
Research, Cambridge, MA, USA. 4Cambridge University, Cambridge, UK. 5Present address: Treeline Biosciences, Watertown, MA, USA. 6Present address: Biomolecular Discovery, Relay
Therapeutics, Cambridge, MA, USA. 7These authors contributed equally: Hannah K. Wayment-Steele, Adedolapo Ojoawo. ✉e-mail: dkern@brandeis.edu |
GPSA-Supplementary-Information.pdf | Supplementary Information for:
Generative Capacity of Probabilistic Protein Sequence Models
Francisco McGee Sandro Hauri Quentin Novinger Slobodan Vucetic Ronald M. Levy
Vincenzo Carnevale Allan Haldane
Supplementary Note 1 - sVAE implementation
The standard variational autoencoder (sVAE) is a deep, symmetrical, and undercomplete autoencoder neural
network composed of a separate encoder qφ(Z|S) and decoder pθ(S|Z) [1], which map input sequences S into regions
of a low-dimensional latent space Z and back (see Fig. S1). It is a probabilistic model, and in our “vanilla” [2]
implementation we assume sequences will be distributed according to a unit normal distribution in latent space,
p(Z) = N[0,1](Z) [3]. Training of a VAE can be understood as maximization of (the logarithm of) the dataset likelihood
L = ∏_S pθ(S) = ∏_S ∫ pθ(S|Z) p(Z) dZ, with the addition of a Kullback-Leibler regularization term DKL[qφ(Z|S), pθ(Z|S)],
where pθ(Z|S) is the posterior of the decoder, which allows use of the fitted encoder qφ(Z|S) to perform efficient
estimation of the likelihood and its gradient by Monte-Carlo sampling, for appropriate encoder models. The sVAE
architecture is built on the same basic VAE architecture of “EVOVAE”4, which itself appears to be built on the VAE
implementation provided by developers for the Keras library5, and this same VAE architecture is used for each
protein presented in this work.
Similarly to EVOVAE, sVAE’s hyperparameters were tuned using grid search. sVAE is composed of 3 symmetrical
ELU-activated layers in both the encoder and decoder, each layer with 250 dense (fully-connected) nodes. The
encoder and decoder are connected by a latent layer of l nodes, and we use l = 7 in the main text. We provide
further justification for the selection of l = 7 elsewhere in the Supplementary Note 3. sVAE’s input layer accepts
one-hot encoded sequences, the output layer is sigmoid-activated, and its node output values can be interpreted as
a Bernoulli distribution of the same dimensions as a one-hot encoded sequence. The first layer of the encoder and
the middle layer of the decoder have dropout regularization applied with 30% dropout rate, and the middle layer of
the encoder uses batch normalization4,6,7.
In all inferences, we hold out 10% of the training sequences as a validation dataset, and perform maximum
likelihood optimization using the Keras Adam stochastic gradient optimizer on the remaining 90% [8], using mini-batch
gradient descent with a batch size of 200. After each training epoch we evaluate the loss function for the training
and validation data subsets separately. We have tested using early-stopping regularization to stop inference once
the validation loss has not decreased for three epochs in a row, as in previous implementations, but this led to some
variability in the model depending on when the early stopping criterion was reached. To avoid this variability, and to
make different models more directly comparable, we instead fixed the number of epochs to 32 for all models, since
in the early stopping tests this led to near minimum training loss and validation loss, and did not lead to significant
overfitting as would be apparent from an increase in the validation loss.
sVAE was implemented using Keras, building on previous implementations4,5, however with a modification of
the loss function relative to both of these, to remove a scaling factor of Lq on the reconstruction loss, which is
sometimes used to avoid issues with local minima as described further below. This prefactor leads to a non-unit
variance of the latent space distribution of the dataset sequences, violating our definition that the latent space
distribution should be normal with unit variance, p(Z) =N[0,1](Z). In the next section we show that after removing
the prefactor the latent space distribution is approximately a unit normal, which more closely follows the original
VAE conception3,9. Our implementation is available at https://github.com/ahaldane/MSA_VAE .
To generate a sequence from the model we generate a random sample in latent space from the latent distribution
N[0,1], and pass this value to the decoder to obtain a Bernoulli distribution, from which we sample once.
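A compact PyTorch re-sketch of the model as described (the released implementation is in Keras): three 250-unit ELU layers on each side, a 7-dimensional latent, sigmoid outputs treated as a Bernoulli over the one-hot sequence, a unit-normal prior, and no prefactor on the reconstruction term. Dropout, batch normalization, and the training loop are omitted for brevity, and the alignment length and alphabet size are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

L_SEQ, Q, LATENT = 100, 21, 7     # alignment length / alphabet size are placeholders

class SeqVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(L_SEQ * Q, 250), nn.ELU(),
                                 nn.Linear(250, 250), nn.ELU(),
                                 nn.Linear(250, 250), nn.ELU())
        self.mu, self.logvar = nn.Linear(250, LATENT), nn.Linear(250, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 250), nn.ELU(),
                                 nn.Linear(250, 250), nn.ELU(),
                                 nn.Linear(250, 250), nn.ELU(),
                                 nn.Linear(250, L_SEQ * Q), nn.Sigmoid())

    def forward(self, x):                                   # x: one-hot, (batch, L_SEQ*Q)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterization
        x_hat = self.dec(z)
        recon = F.binary_cross_entropy(x_hat, x, reduction="sum")     # no Lq prefactor
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
        return recon + kl                                   # negative ELBO

def sample_sequences(model, n):
    """Generation as described: z ~ N(0, I), decode to Bernoulli probabilities, sample once."""
    with torch.no_grad():
        probs = model.dec(torch.randn(n, LATENT))
        return torch.bernoulli(probs).reshape(n, L_SEQ, Q)

model = SeqVAE()
x = torch.zeros(8, L_SEQ * Q)
x[:, ::Q] = 1.0                    # dummy one-hot batch (every position set to the first letter)
print(model(x).item(), sample_sequences(model, 3).shape)
```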
|
2402.04236.pdf | CogCoM: Train Large Vision-Language Models Diving into Details through
Chain of Manipulations
Ji Qi1‡Ming Ding2Weihan Wang1‡Yushi Bai1‡Qingsong Lv2Wenyi Hong1‡
Bin Xu1Lei Hou1Juanzi Li1Yuxiao Dong1Jie Tang1
qj20@mails.tsinghua.edu.cn, ming.ding@zhipuai.cn
Abstract
Vision-Language Models (VLMs) have demon-
strated their widespread viability thanks to exten-
sive training in aligning visual instructions to an-
swers. However, this conclusive alignment leads
models to ignore critical visual reasoning, and fur-
ther result in failures on meticulous visual prob-
lems and unfaithful responses. In this paper, we
propose Chain of Manipulations , a mechanism
that enables VLMs to solve problems with a se-
ries of manipulations, where each manipulation
refers to an operation on the visual input, either
from intrinsic abilities ( e.g., grounding ) acquired
through prior training or from imitating human-
like behaviors ( e.g., zoom in ). This mechanism
encourages VLMs to generate faithful responses
with evidential visual reasoning, and permits users
to trace error causes in the interpretable paths. We
thus train CogCoM , a general 17B VLM with a
memory-based compatible architecture endowed
this reasoning mechanism. Experiments show
that our model achieves the state-of-the-art per-
formance across 8 benchmarks from 3 categories,
and a limited number of training steps with the
data swiftly gains a competitive performance. The
code and data are publicly available at this url.
1. Introduction
Benefiting from the advantage of Large Language Models
(LLMs) in broad world knowledge, large Vision Language
Models (VLMs) (Alayrac et al., 2022; Wang et al., 2023b)
that are further trained to understand vision have demon-
strated viabilities on broad scenarios, such as visual question
answering (Liu et al., 2023b), visual grounding (Peng et al.,
2023), optical character recognition (Zhang et al., 2023b).
1Tsinghua University2Zhipu AI‡Done as intern at Zhipu AI.
Correspondence to: Bin Xu <xubin@tsinghua.edu.cn >, Jie Tang
<jietang@tsinghua.edu.cn >.
[Figure 1 residue: given the question “What is written on the pillar in front of the man in black top?”, CogCoM applies Grounding(the man in black top), Grounding(pillar near the man), and CropZoomIn(4 times) before answering “QUICK DEPOSIT”, while a plain VLM answers “NO SMOKING”.]
Figure 1. In comparison with existing vision-language models,
CogCoM performs the multiple steps of evidential reasoning with
chain of manipulations (CoM) to achieve the final answer.
The research employing VLMs as foundation models (Bai
et al., 2023; Sun et al., 2023b; Wang et al., 2023b) usually
involves two main stages of training, where the first stage
cultivates intrinsic visual understanding through exposure to
massive image-caption pairs, and the second stage endows
the models with problem-solving capabilities through an
instruction tuning. Some other studies (Dai et al., 2023;
Chen et al., 2023b; Zhang et al., 2023b) directly perform
the second stage for the applicable scenes.
However, existing tuning methods train models to respond
to instructions with conclusive linguistic answers upon vi-
sual inputs, which leads models to ignore the essential visual
reasoning and further results in failures in meticulous vi-
sual problems, unfaithful responses, and even hallucinations.
For example in Figure 1, we test the top performing model
CogVLM (Wang et al., 2023b) about the details in the image
(i.e., texts written on pillar ), and it directly gives an incor-
rect answer ( i.e., NO SMOKING ), most likely from bias to
visual or linguistic priors ( i.e., typical scenes with pillar in
office ). The absence of this evidential reasoning with visual
evidence leads to a rash response (Hwang et al., 2023).
Humans solve the meticulous visual problems by marking
or processing the given images for convenience and rigor,
|
2301.12652.pdf | REPLUG: Retrieval-Augmented Black-Box Language Models
Weijia Shi,1 *Sewon Min,1Michihiro Yasunaga,2Minjoon Seo,3Rich James,4Mike Lewis,4
Luke Zettlemoyer1 4Wen-tau Yih4
Abstract
We introduce REPLUG, a retrieval-augmented lan-
guage modeling framework that treats the lan-
guage model (LM) as a black box and augments
it with a tuneable retrieval model. Unlike prior
retrieval-augmented LMs that train language mod-
els with special cross attention mechanisms to en-
code the retrieved text, REPLUG simply prepends
retrieved documents to the input for the frozen
black-box LM. This simple design can be eas-
ily applied to any existing retrieval and language
models. Furthermore, we show that the LM can
be used to supervise the retrieval model, which
can then find documents that help the LM make
better predictions. Our experiments demonstrate
thatREPLUG with the tuned retriever significantly
improves the performance of GPT-3 (175B) on
language modeling by 6.3%, as well as the perfor-
mance of Codex on five-shot MMLU by 5.1%.
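A minimal sketch of the ensembling idea, assuming a black-box LM exposed only through a next-token-probability call: each retrieved document is prepended to the query separately, and the per-document output distributions are averaged with weights given by normalized retrieval scores. The stand-in LM, scores, and documents below are illustrative.

```python
import math

def retrieval_ensemble(query, docs_with_scores, lm_next_token_probs):
    """Prepend each retrieved doc to the query, score the next token with a frozen
    black-box LM, and mix the distributions with softmax-normalized retrieval scores.
    `lm_next_token_probs` is a stand-in returning a dict of token -> probability."""
    weights = [math.exp(s) for _, s in docs_with_scores]
    total = sum(weights)
    combined = {}
    for (doc, _), w in zip(docs_with_scores, weights):
        probs = lm_next_token_probs(doc + "\n\n" + query)   # retrieved doc goes in front
        for tok, p in probs.items():
            combined[tok] = combined.get(tok, 0.0) + (w / total) * p
    return combined

# Toy stand-in LM: prefers "Paris" only when the prepended document mentions France.
fake_lm = lambda prompt: ({"Paris": 0.8, "Rome": 0.2} if "France" in prompt
                          else {"Paris": 0.3, "Rome": 0.7})
docs = [("France's capital is a major city.", 2.0), ("Italy has many ancient ruins.", 0.5)]
print(retrieval_ensemble("Q: What is the capital of France? A:", docs, fake_lm))
```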
1. Introduction
Large language models (LLMs) such as GPT-3 (Brown et al.,
2020a) and Codex (Chen et al., 2021a), have demonstrated
impressive performance on a wide range of language tasks.
These models are typically trained on very large datasets and
store a substantial amount of world or domain knowledge
implicitly in their parameters. However, they are also prone
to hallucination and cannot represent the full long tail of
knowledge from the training corpus. Retrieval-augmented
language models (Khandelwal et al., 2020; Borgeaud et al.,
2022; Izacard et al., 2022b; Yasunaga et al., 2022), in con-
trast, can retrieve knowledge from an external datastore
when needed, potentially reducing hallucination and increas-
ing coverage. Previous approaches of retrieval-augmented
language models require access to the internal LM repre-
sentations (e.g., to train the model (Borgeaud et al., 2022;
1University of Washington 2Stanford University 3KAIST 4Meta AI.
*Work done while the first author was interning at Meta AI.
Correspondence to: Weijia Shi <swj0419@uw.edu>.
Figure 1. Different from previous retrieval-augmented ap-
proaches (Borgeaud et al., 2022) that enhance a language model
with retrieval by updating the LM’s parameters, REPLUG treats
the language model as a black box and augments it with a frozen
or tunable retriever. This black-box assumption makes REPLUG
applicable to large LMs (i.e., >100B parameters), which are often
served via APIs.
Izacard et al., 2022b) or to index the datastore (Khandelwal
et al., 2020)), and are thus difficult to be applied to very
large LMs. In addition, many best-in-class LLMs can only
be accessed through APIs. Internal representations of such
models are not exposed and fine-tuning is not supported.
In this work, we introduce REPLUG (Retrieve and Plug ),
a new retrieval-augmented LM framework where the lan-
guage model is viewed as a black box and the retrieval
component is added as a tuneable plug-and-play module.
Given an input context, REPLUG first retrieves relevant
documents from an external corpus using an off-the-shelf
retrieval model. The retrieved documents are prepended to
the input context and fed into the black-box LM to make
the final prediction. Because the LM context length limits
the number of documents that can be prepended, we also
introduce a new ensemble scheme that encodes the retrieved
documents in parallel with the same black-box LM, allow-
ing us to easily trade compute for accuracy. As shown in |
2311.05020.pdf | First Tragedy, then Parse:
History Repeats Itself in the New Era of Large Language Models
Naomi Saphra
Kempner Institute at Harvard University
nsaphra@fas.harvard.eduEve Fleisig
University of California - Berkeley
efleisig@berkeley.edu
Kyunghyun Cho
New York University & Genentech
kyunghyun.cho@nyu.eduAdam Lopez
University of Edinburgh
alopez@inf.ed.ac.uk
Abstract
Many NLP researchers are experiencing an ex-
istential crisis triggered by the astonishing suc-
cess of ChatGPT and other systems based on
large language models (LLMs). After such a
disruptive change to our understanding of the
field, what is left to do? Taking a historical
lens, we look for guidance from the first era of
LLMs, which began in 2005 with large n-gram
models for machine translation. We identify
durable lessons from the first era, and more
importantly, we identify evergreen problems
where NLP researchers can continue to make
meaningful contributions in areas where LLMs
are ascendant. Among these lessons, we dis-
cuss the primacy of hardware advancement in
shaping the availability and importance of scale,
as well as the urgent challenge of quality eval-
uation, both automated and human. We argue
that disparities in scale are transient and that
researchers can work to reduce them; that data,
rather than hardware, is still a bottleneck for
many meaningful applications; that meaning-
ful evaluation informed by actual use is still an
open problem; and that there is still room for
speculative approaches.
1 Introduction
Picture this scene: A renowned NLP researcher at
a hot seven-year-old tech startup steps onstage to
give a keynote address. The speaker describes an
ambitious new system to the packed room, build-
ing up to the results slide: a bar chart in which the
x-axis shows the number of training words, and the
y-axis shows system accuracy. As each data point
is revealed, performance rises relentlessly, culmi-
nating in a system trained on well over a trillion
words using over a thousand processor cores. It
smashes the state of the art by a margin previously
thought impossible.
Attendees are visibly shaken as they realize, over
the course of a minute, that years of research have
just been rendered utterly inconsequential. Estab-
Figure 1: Results slide (reproduced from Och (2005))
of Franz Och’s keynote talk at the 2005 ACL Workshop
on Building and Using Parallel Texts, a predecessor to
the Conference on Machine Translation.
lished academics panic, anticipating the whole-
sale rejection of already-submitted grant applica-
tions. PhD students despair, contemplating the
irrelevance of their unfinished dissertations. Many
ponder an exit to industry or a change of fields.
They will speak of little else this week.
Does this scene sound like one that might have
happened in the past year? In fact, it happened
18 years ago, in 2005, launching the first era of
Large Language Models (LLMs): the Statistical
Machine Translation (SMT) era .1The speaker,
Franz Och, had co-invented key methods in statisti-
cal machine translation (Och and Ney, 2003; Koehn
et al., 2003; Och, 2003), but had not published new
work since joining Google in 2004, instead reveal-
ing it in an invited talk prior to the launch of Google
Translate.2The provocative results slide from that
talk (Figure 1) shows how Google improved its
SMT system simply by training a phrase-based lan-
guage model on more and more data (Brants et al.,
1The description is based on the vivid recollections of one
of the authors, who was present.
2https://ai.googleblog.com/2006/04/
statistical-machine-translation-live.html |
Peebles-Scalable-Diffusion-Models-with-Transformers-ICCV-2023-paper.pdf | Scalable Diffusion Models with Transformers
William Peebles*
UC BerkeleySaining Xie
New York University
Figure 1: Diffusion models with transformer backbones achieve state-of-the-art image quality. We show selected sam-
ples from two of our class-conditional DiT-XL/2 models trained on ImageNet at 512×512 and 256×256 resolution.
Abstract
We explore a new class of diffusion models based on the
transformer architecture. We train latent diffusion models
of images, replacing the commonly-used U-Net backbone
with a transformer that operates on latent patches. We an-
alyze the scalability of our Diffusion Transformers (DiTs)
through the lens of forward pass complexity as measured by
Gflops. We find that DiTs with higher Gflops—through in-
creased transformer depth/width or increased number of in-
put tokens—consistently have lower FID. In addition to pos-
sessing good scalability properties, our largest DiT-XL/2
models outperform all prior diffusion models on the class-
conditional ImageNet 512×512 and 256×256 benchmarks,
achieving a state-of-the-art FID of 2.27 on the latter.
1. Introduction
Machine learning is experiencing a renaissance pow-
ered by transformers. Over the past five years, neural
architectures for natural language processing [ 42,8], vi-
sion [ 10] and several other domains have been subsumed
by transformers [ 60]. Many classes of image-level gener-
ative models remain holdouts to the trend, though—while
transformers see widespread use in autoregressive mod-
els [43,3,6,47], they have seen less adoption in other gen-
erative modeling frameworks. For example, diffusion mod-
els have been at the forefront of recent advances in image
generation [ 9,46]; yet, they all adopt a convolutional U-Net
architecture as the de-facto choice of backbone.
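The "transformer that operates on latent patches" mentioned in the abstract starts from a patchify step that turns a latent feature map into a token sequence; the sketch below shows that step with an illustrative patch size, omitting the linear projection to the transformer width and the positional embeddings.

```python
import torch

def patchify(latent, patch=2):
    """Turn a latent feature map (C, H, W) into a sequence of flattened patch tokens.
    Patch size 2 is illustrative; H and W must be divisible by it."""
    C, H, W = latent.shape
    tokens = (latent
              .unfold(1, patch, patch)        # split H into patches
              .unfold(2, patch, patch)        # split W into patches
              .permute(1, 2, 0, 3, 4)         # (H/p, W/p, C, p, p)
              .reshape(-1, C * patch * patch))
    return tokens                              # (num_patches, C*p*p)

z = torch.randn(4, 32, 32)                    # e.g. a 32x32 latent with 4 channels
print(patchify(z).shape)                      # torch.Size([256, 16])
```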
*Work done during an internship at Meta AI, FAIR Team.
Code and project page available here.
|
hyvarinen05a.pdf | Journal of Machine Learning Research 6 (2005) 695–709 Submitted 11/04; Revised 3/05; Published 4/05
Estimation of Non-Normalized Statistical Models
by Score Matching
Aapo Hyvärinen aapo.hyvarinen@helsinki.fi
Helsinki Institute for Information Technology (BRU)
Department of Computer Science
FIN-00014 University of Helsinki, Finland
Editor: Peter Dayan
Abstract
One often wants to estimate statistical models where the probability density function is
known only up to a multiplicative normalization constant. Typically, one then has to resort
to Markov Chain Monte Carlo methods, or approximations of the normalization constant.
Here, we propose that such models can be estimated by minimizing the expected squared
distance between the gradient of the log-density given by the model and the gradient of
the log-density of the observed data. While the estimation of the gradient of log-density
function is, in principle, a very difficult non-parametric problem, we prove a surprising
result that gives a simple formula for this objective function. The density function of the
observed data does not appear in this formula, which simplifies to a sample average of a
sum of some derivatives of the log-density given by the model. The validity of the method
is demonstrated on multivariate Gaussian and independent component analysis models,
and by estimating an overcomplete filter set for natural image data.
Keywords: statistical estimation, non-normalized densities, pseudo-likelihood, Markov
chain Monte Carlo, contrastive divergence
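A small autograd sketch of the objective summarized in the abstract: the model score ψ = ∇x log q(x; θ) is fit by minimizing a sample average of ½‖ψ‖² plus the diagonal derivatives of ψ, so the normalization constant Z(θ) never has to be computed. The toy Gaussian model, data, and batch size are illustrative.

```python
import torch

def score_matching_loss(log_q, x):
    """Sample-average score-matching objective for an unnormalized log-density log q(x; θ):
    J ≈ mean_t sum_i ( d psi_i/dx_i + 0.5 * psi_i^2 ), with psi = grad_x log q."""
    x = x.requires_grad_(True)
    logq = log_q(x).sum()
    (psi,) = torch.autograd.grad(logq, x, create_graph=True)      # score, same shape as x
    loss = 0.5 * (psi ** 2).sum(dim=1)
    for i in range(x.shape[1]):                                    # diagonal Hessian terms
        (d_psi_i,) = torch.autograd.grad(psi[:, i].sum(), x, create_graph=True)
        loss = loss + d_psi_i[:, i]
    return loss.mean()

# Toy unnormalized Gaussian model: log q(x) = -0.5 * ||x||^2 / sigma^2, sigma learnable.
sigma = torch.tensor(2.0, requires_grad=True)
log_q = lambda x: -0.5 * (x ** 2).sum(dim=1) / sigma ** 2
data = torch.randn(512, 2)                                         # samples from N(0, I)
print(score_matching_loss(log_q, data))    # minimized (over sigma) near sigma = 1
```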
1. Introduction
In many cases, probabilistic models in machine learning, statistics, or signal processing are
given in the form of non-normalized probability densities. That is, the model contains an
unknown normalization constant whose computation is too difficult for practical purposes.
Assume we observe a random vector x ∈ R^n which has a probability density function
(pdf) denoted by px(·). We have a parametrized density model p(·; θ), where θ is an m-
dimensional vector of parameters. We want to estimate the parameter θ from x, i.e. we
want to approximate px(·) by p(·; θ̂) for the estimated parameter value θ̂. (We shall here
consider the case of continuous-valued variables only.)
The problem we consider here is that we only are able to compute the pdf given by the
model up to a multiplicative constant Z(θ):
p(x; θ) = q(x; θ) / Z(θ).
That is, we do know the functional form of q as an analytical expression (or any form that
can be easily computed), but we do not know how to easily compute Z which is given by
|
2403.09611.pdf | MM1: Methods, Analysis & Insights from
Multimodal LLM Pre-training
Brandon McKinzie◦, Zhe Gan◦, Jean-Philippe Fauconnier⋆,
Sam Dodge⋆, Bowen Zhang⋆, Philipp Dufter⋆, Dhruti Shah⋆, Xianzhi Du⋆,
Futang Peng, Floris Weers, Anton Belyi, Haotian Zhang, Karanjeet Singh,
Doug Kang, Hongyu Hè, Max Schwarzer, Tom Gunter, Xiang Kong,
Aonan Zhang, Jianyu Wang, Chong Wang, Nan Du, Tao Lei, Sam Wiseman,
Mark Lee, Zirui Wang, Ruoming Pang, Peter Grasch⋆,
Alexander Toshev†, and Yinfei Yang†
Apple
bmckinzie@apple.com ,zhe.gan@apple.com
◦First authors;⋆Core authors;†Senior authors
Abstract. In this work, we discuss building performant Multimodal
Large Language Models (MLLMs). In particular, we study the impor-
tance of various architecture components and data choices. Through
careful and comprehensive ablations of the image encoder, the vision
language connector, and various pre-training data choices, we identi-
fied several crucial design lessons. For example, we demonstrate that for
large-scale multimodal pre-training using a careful mix of image-caption,
interleaved image-text, and text-only data is crucial for achieving state-
of-the-art (SOTA) few-shot results across multiple benchmarks, com-
pared to other published pre-training results. Further, we show that the
image encoder together with image resolution and the image token count
has substantial impact, while the vision-language connector design is of
comparatively negligible importance. By scaling up the presented recipe,
we build MM1, a family of multimodal models up to 30B parameters,
consisting of both dense models and mixture-of-experts (MoE) variants,
that are SOTA in pre-training metrics and achieve competitive perfor-
mance after supervised fine-tuning on a range of established multimodal
benchmarks. Thanks to large-scale pre-training, MM1 enjoys appealing
properties such as enhanced in-context learning, and multi-image rea-
soning, enabling few-shot chain-of-thought prompting.
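As a point of reference for what a "vision-language connector" does (the component the ablations above find comparatively unimportant), here is a minimal, hypothetical PyTorch sketch: pool the image encoder's patch features down to a fixed number of visual tokens and project them into the LLM's embedding space. This is an illustrative stand-in, not the MM1 connector; all class and parameter names are made up.

import torch
import torch.nn as nn

class PoolingConnector(nn.Module):
    """Map (batch, num_patches, vision_dim) patch features to (batch, num_tokens, llm_dim)."""
    def __init__(self, vision_dim: int, llm_dim: int, num_tokens: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(num_tokens)   # pools over the patch axis
        self.proj = nn.Linear(vision_dim, llm_dim)     # projects into the LLM embedding space

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        x = patch_features.transpose(1, 2)             # (B, vision_dim, num_patches)
        x = self.pool(x).transpose(1, 2)               # (B, num_tokens, vision_dim)
        return self.proj(x)                            # (B, num_tokens, llm_dim)

The point here is only the interface: whatever happens inside, a connector turns patch features into a fixed number of visual tokens that can be interleaved with the text embeddings fed to the language model.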
1 Introduction
In recent years, the research community has achieved impressive progress in
language modeling and image understanding. Thanks to the availability of large-
scale image-text data and compute at scale, we have seen the emergence of highly
performant Large Language Models (LLMs) [9,10,19,21,26,92,93,103,108,110,
117,130] and Vision Foundation Models [40,88,91] that have become the de-
facto standard for the majority of language and image understanding problems. |
1905.04226.pdf | Language Modeling with Deep Transformers
Kazuki Irie1, Albert Zeyer1,2, Ralf Schlüter1, Hermann Ney1,2
1Human Language Technology and Pattern Recognition Group, Computer Science Department
RWTH Aachen University, 52074 Aachen, Germany
2AppTek GmbH, 52062 Aachen, Germany
{irie, zeyer, schlueter, ney }@cs.rwth-aachen.de
Abstract
We explore deep autoregressive Transformer models in language
modeling for speech recognition. We focus on two aspects.
First, we revisit Transformer model configurations specifically
for language modeling. We show that well configured Trans-
former models outperform our baseline models based on the
shallow stack of LSTM recurrent neural network layers. We
carry out experiments on the open-source LibriSpeech 960hr
task, for both 200K vocabulary word-level and 10K byte-pair
encoding subword-level language modeling. We apply our word-
level models to conventional hybrid speech recognition by lat-
tice rescoring, and the subword-level models to attention based
encoder-decoder models by shallow fusion. Second, we show
that deep Transformer language models do not require positional
encoding. The positional encoding is an essential augmentation
for the self-attention mechanism which is invariant to sequence
ordering. However, in autoregressive setup, as is the case for lan-
guage modeling, the amount of information increases along the
position dimension, which is a positional signal by its own. The
analysis of attention weights shows that deep autoregressive self-
attention models can automatically make use of such positional
information. We find that removing the positional encoding even
slightly improves the performance of these models.
Index Terms : language modeling, self-attention, Transformer,
speech recognition
1. Introduction
Transformer encoder-decoder models [1] have become popular
in natural language processing. The Transformer architecture
allows to successfully train a deep stack of self-attention lay-
ers [2 –4] via residual connections [5] and layer normalization [6].
The positional encodings [1, 7], typically based on sinusoidal
functions, are used to provide the self-attention with the sequence
order information. Across various applications, systematic im-
provements have been reported over the standard, multi-layer
long short-term memory (LSTM) [8] recurrent neural network
based models. While originally designed as an encoder-decoder
architecture in machine translation, the encoder (e.g., [9]) and
thedecoder (e.g., [10]) components are also separately used
in corresponding problems depending on whether the problem
disposes the whole sequence for prediction or not.
A number of recent works have also shown impressive per-
formance in language modeling using the Transformer decoder
component [10 –15]. The earliest example can be found in [10]
where such models are investigated for text generation. Re-
cent works on training larger and deeper models [12, 14, 15]
have shown further potential of the Transformer in language
modeling. On the other hand, an obvious limitation of the Trans-
formers is that their memory requirement linearly increases in
terms of number of tokens in the sequence, which requires to
work with a limited context window (basically an n-gram model
where the typical number for n is 512) for tasks dealing with
long sequences such as character-level language modeling [12].
Dai et al. [11] has introduced a segment-level recurrence and relative positional encoding in the Transformer language model
to be able to potentially handle unlimited context.
In this work, we investigate deep autoregressive Transform-
ers for language modeling in speech recognition. To be specific,
we focus on two aspects. First, we revisit the parameter configu-
rations of Transformers, originally engineered for the sequence-
to-sequence problem [1], specifically for language modeling. We
conduct experiments on the LibriSpeech automatic speech recog-
nition (ASR) task [16] for both word-level conventional speech
recognition and byte-pair encoding (BPE) [17] level end-to-end
speech recognition [18, 19]. We apply our word-level models to
hybrid speech recognition by lattice rescoring [20], and the BPE-
level models to end-to-end models by shallow fusion [21, 22].
We show that well configured Transformer language models out-
perform models based on the simple stack of LSTM RNN layers
in terms of both perplexity and word error rate (WER).
Second, we experimentally show that the positional encod-
ing is not needed for multi-layer autoregressive self-attention
models. The visualization of the attention weights shows that
when the sinusoidal positional encoding is provided with the
input, the first layer of the Transformers learns to extract n-
gram features (therefore making use of positional information).
However, in the autoregressive problem where a new token is
provided to the model at each time step, the amount of infor-
mation the model has access to strictly increases from left to
right at the lowest level of the network, which should provide
some positional information on its own. We observe that deep
Transformer language models without positional encoding au-
tomatically make use of such information, and even give slight
improvements over models with positional encodings.
2. Related Work
The first part of our work follows the spirit of Al-Rfou et al.'s
work [12] and Radford et al.’s work [14,15] in investigating larger
and deeper Transformers for language modeling. We show that
deep Transformer language models can be successfully applied
to speech recognition and give good performance. The second
part of this work concerns the positional encoding, which is a
crucial component in the original Transformer. A number of pre-
vious work investigated positional encoding variants to improve
self-attention (e.g., [11, 23 –25]). Previous works in Transformer
language models systematically use positional encoding, either
jointly learned one or the sinusoidal one (both cases are reported
to give similar performance in [12]). We show that the deep
autoregressive self-attention models do not require any explicit
model for encoding positions to give the best performance.
3. Autoregressive Self-Attention
The language model we consider is based on the decoder com-
ponent of the Transformer architecture [1]. Similar to previous
work [10 –15], we define layer as a stack of two components:
self-attention and feed-forward1 modules.
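As a rough illustration of the kind of layer described here (causal self-attention followed by a feed-forward block, with no positional encoding added to the input), consider the following PyTorch sketch. Dimensions, normalization placement, and other details are illustrative defaults, not the configurations tuned in this paper.

import torch
import torch.nn as nn

class AutoregressiveLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T = x.size(1)
        # Causal mask: position i may only attend to positions <= i (True = not allowed).
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        a, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + a)
        return self.norm2(x + self.ff(x))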
1Typically called position-wise feed-forward module [1]. Here we
omit position-wise as it is obvious for autoregressive models. |
10.1038.s41467-023-37023-9.pdf | Article https://doi.org/10.1038/s41467-023-37023-9
Observation of electron orbital signatures of
single atoms within metal-phthalocyanines using atomic force microscopy
Pengcheng Chen1,9, Dingxin Fan1,2,9, Annabella Selloni3, Emily A. Carter4,5,
Craig B. Arnold1,4, Yunlong Zhang6, Adam S. Gross6,
James R. Chelikowsky2,7,8 & Nan Yao1
Resolving the electronic structure of a single atom within a molecule is of
fundamental importance for understanding and predicting chemical and physical properties of functional molecules
such as molecular catalysts. However, the observation of the orbital signature of an individual atom is challenging.
We report here the direct identification of two adjacent transition-metal atoms, Fe and Co, within phthalocyanine
molecules using high-resolution noncontact atomic force microscopy (HR-AFM). HR-AFM imaging reveals that
the Co atom is brighter and presents four distinct lobes on the horizontal plane whereas the Fe atom displays a
"square" morphology. Pico-force spectroscopy measurements show a larger repulsion force of about 5 pN on the tip
exerted by Co in comparison to Fe. Our combined experimental and theoretical results demonstrate that both the
distinguishable features in AFM images and the variation in the measured forces arise from Co's higher electron
orbital occupation above the molecular plane. The ability to directly observe orbital signatures using HR-AFM
should provide a promising approach to characterizing the electronic structure of an individual atom in a molecular
species and to understand mechanisms of certain chemical reactions.
Real-space experimental observation of localized electron orbital sig-
natures for individual atoms within complex systems can elucidate how
atoms interact with each other and provide critical information on the dissociation and formation of chemical bonds
needed for identifying reaction pathways. However, the direct measurement of the electronic structure of a single
atom or a chemical bond is challenging. Several experimental methods have enabled probing of molecular orbital
distributions under certain conditions, including angle-resolved photoemission spectroscopy1,2, high harmonic
interferometry3, and photoionization microscopy4. In real space, orbital-related information can be obtained with
scanning tunneling microscopy (STM)5–10, which images the spatially resolved local density of states near the
Fermi level11. In addition, HR-AFM with molecularly functionalized tips has been used for quantitative structural
measurements on organic molecules with spectacular atomic resolution12,13. Bond order14,15 and heteroatom16,17
discrimination, and even real-space imaging of individual atoms18,19 and intermolecular bonds have been
reported20,21. These experimental advances have been accompanied by the innovation of
Received: 3 October 2022
Accepted: 20 February 2023
1Princeton Materials Institute, Princeton University, Princeton, NJ 08540-8211, USA.2McKetta Department of Chemical Engineering, University of Texas at
Austin, Austin, TX 78712-1589, USA.3Department of Chemistry, Princeton University, Princeton, NJ 08544-0001, USA.4Department of Mechanical and
Aerospace Engineering and the Andlinger Center for Energy and the Environment, Princeton University, Princeton, NJ 08544-5263, USA.5Princeton Plasma
Physics Laboratory, Princeton, NJ 08540-6655, USA.6ExxonMobil Technology and Engineering Company, Annandale, NJ 08801-3096, USA.7Department of
Physics, University of Texas at Austin, Austin, TX 78712-1192, USA.8Center for Computational Materials, Oden Institute for Computational Engineering and
Sciences, University of Texas at Austin, Austin, TX 78712-1229, USA.9These authors contributed equally: Pengcheng Chen, Dingxin Fan.
e-mail: jrc@utexas.edu ;nyao@princeton.edu
Nature Communications | (2023) 14:1460 |
Text2Graphics.pdf | Text to Graphics by Program Synthesis with Error Correction
on Precise, Procedural, and Simulation Tasks
Arvind Raghavan
Columbia University
ar4284@columbia.eduZad Chin
Harvard University
zadchin@college.harvard.eduAlexander E. Siemenn
MIT
asiemenn@mit.edu
Vitali Petsiuk
Boston University
vpetsiuk@bu.eduSaisamrit Surbehera
Columbia University
ss6365@columbia.eduYann Hicke
Cornell University
ylh8@cornell.eduEd Chien
Boston University
edchien@bu.edu
Ori Kerret
Ven Commerce
ori@ven.comTonio Buonassisi
MIT
buonassi@mit.eduKate Saenko
Boston University
saenko@bu.eduArmando Solar-Lezama
MIT
armando@csail.mit.edu
Iddo Drori
MIT, Columbia University, Boston University
idrori@mit.edu,idrori@cs.columbia.edu
Abstract
Current text-to-image methods fail on visual tasks that
require precision or a procedural specification. DALL-
E 2 and StableDiffusion fit well into frameworks such as
Photoshop; however, they cannot accomplish precise de-
sign, engineering, or physical simulation tasks. We cor-
rectly perform such tasks by turning them into program-
ming tasks, automatically generating code with the latest
graphics libraries, and then running code to render images.
Code generation models such as Codex often generate er-
rors on complex programs, so we perform local error cor-
rection. Rather than subjectively evaluating results on a
set of prompts, we generate a new multi-task benchmark of
challenge tasks. We demonstrate the applicability of our
approach for precise and procedural rendering and physi-
cal simulations.
1. Introduction
Text-to-image methods such as DALL-E 2 [16] and Sta-
bleDiffusion [18] are remarkably effective at producing cre-
ative and novel images that reflect a user prompt. The re-
sults have captured the popular imagination, and artists are
already using these systems as tools in their creative pro-
cess [17]. However, these methods have various failure
modes that disrupt this process. For example, they oftenfail to produce precise results and respond appropriately to
specified numeracy, positions, spellings, etc. They also do
not allow for simple compositional changes to the image,
such as modifying the color or styling of a particular ob-
ject without manual marking or outlining. Finally, they may
produce images that defy conventional physics.
We address these particular issues by reframing the prob-
lem as one of graphics code generation. When a scene is
specified at the code level, precision in numerics and posi-
tioning is required, objects are instantiated separately and
specified, and realistic physics models are generated. Our
particular model uses Codex [3] to generate Python code
for rendering objects in Blender [6] and further leverages
a program synthesis approach, allowing for automatic error
correction and user-guided edits to the output image. Figure
1 shows a schematic of this.
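A minimal sketch of the generate-run-repair loop just described: a code-generation model proposes a rendering script, the script is executed, and any traceback is fed back for another attempt. The generate_code callable is a stand-in for a model such as Codex; only the control flow is meant to be illustrative.

import traceback
from typing import Callable

def synthesize(prompt: str, generate_code: Callable[[str, str], str], max_attempts: int = 3) -> str:
    """Generate graphics code from a prompt and locally repair it until it runs."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(prompt, feedback)   # e.g. a Codex-style completion call (caller supplies it)
        try:
            exec(code, {})                       # e.g. a Blender Python rendering script
            return code                          # the program ran; keep it
        except Exception:
            feedback = traceback.format_exc()    # feed the error back for local correction
    raise RuntimeError("no runnable program found within the attempt budget")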
To demonstrate the efficacy of our method, we show
that it outperforms DALL-E 2 and StableDiffusion on tasks
from a recent benchmark designed to test the ability of these
systems to cope with challenging prompts. In summary, our
contributions are listed below:
• We reframe the problem of image generation as one
of graphics code generation, allowing for unparalleled
precision and compositional control over the image
output.
• We leverage program synthesis tools, allowing for au-
tomatic error correction and user-guided modification |
2307.00494.pdf | Optimizing protein fitness using Gibbs sampling with
Graph-based Smoothing
Andrew Kirjner∗
Massachusetts Institute of Technology
kirjner@mit.eduJason Yim∗
Massachusetts Institute of Technology
jyim@mit.edu
Raman Samusevich
IOCB, Czech Academy of Sciences,
CIIRC, Czech Technical University in Prague
raman.samusevich@uochb.cas.czTommi Jaakkola†
Massachusetts Institute of Technology
tommi@csail.mit.edu
Regina Barzilay†
Massachusetts Institute of Technology
regina@csail.mit.eduIla Fiete†
Massachusetts Institute of Technology
fiete@mit.edu
Abstract
The ability to design novel proteins with higher fitness on a given task would be
revolutionary for many fields of medicine. However, brute-force search through
the combinatorially large space of sequences is infeasible. Prior methods constrain
search to a small mutational radius from a reference sequence, but such heuristics
drastically limit the design space. Our work seeks to remove the restriction on mu-
tational distance while enabling efficient exploration. We propose Gibbs sampling
with Graph-based Smoothing (GGS) which iteratively applies Gibbs with gradients
to propose advantageous mutations using graph-based smoothing to remove noisy
gradients that lead to false positives. Our method is state-of-the-art in discovering
high-fitness proteins with up to 8 mutations from the training set. We study the
GFP and AAV design problems, ablations, and baselines to elucidate the results.
Code: https://github.com/kirjner/GGS
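As a loose illustration of what "graph-based smoothing" can mean in this setting (not the authors' exact formulation): build a neighbor graph over training sequences and repeatedly blend each sequence's noisy fitness value with the average of its neighbors, damping single-point spikes before any gradients are taken from a model fit to the smoothed labels.

import numpy as np

def smooth_fitness(fitness: np.ndarray, neighbors: list, alpha: float = 0.5, iters: int = 10) -> np.ndarray:
    """Blend each node's fitness with the mean fitness of its graph neighbors.

    fitness: (N,) noisy fitness values; neighbors: list of index lists, one per node."""
    f = fitness.astype(float).copy()
    for _ in range(iters):
        neigh_mean = np.array([f[idx].mean() if len(idx) > 0 else f[i]
                               for i, idx in enumerate(neighbors)])
        f = (1.0 - alpha) * f + alpha * neigh_mean   # simple Laplacian-style smoothing step
    return f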
1 Introduction
In protein design, fitness is loosely defined as performance on a desired property or function. Ex-
amples of fitness include catalytic activity for enzymes [ 1,21] and fluorescence for biomarkers [ 29].
Protein engineering seeks to design proteins with high fitness by altering the underlying sequences of
amino acids. However, the number of possible proteins increases exponentially with sequence length,
rendering it infeasible to perform brute-force search to engineer novel functions which often requires
many mutations (i.e. at least 3 [ 12]). Directed evolution [ 3] has been successful in improving protein
fitness, but it requires substantial labor and time to gradually explore many mutations.
We aim to find shortcuts to generate high-fitness proteins that are many mutations away from what is
known but face several challenges. Proteins are notorious for highly non-smooth fitness landscapes:3
fitness can change dramatically with just a single mutation, and most protein sequences have zero
∗Contributed equally to this work. Authors agreed ordering can be changed for their respective interests.
†Advised equally to this work.
3Landscape refers to the mapping from sequence to fitness.
Preprint. Under review. |
2302.05442.pdf | Scaling Vision Transformers to 22 Billion Parameters
Mostafa Dehghani∗Josip Djolonga∗Basil Mustafa∗Piotr Padlewski∗Jonathan Heek∗
Justin Gilmer Andreas Steiner Mathilde Caron Robert Geirhos Ibrahim Alabdulmohsin
Rodolphe Jenatton Lucas Beyer Michael Tschannen Anurag Arnab Xiao Wang
Carlos Riquelme Matthias Minderer Joan Puigcerver Utku Evci Manoj Kumar
Sjoerd van Steenkiste Gamaleldin F. Elsayed Aravindh Mahendran Fisher Yu
Avital Oliver Fantine Huot Jasmijn Bastings Mark Patrick Collier Alexey A. Gritsenko
Vighnesh Birodkar Cristina Vasconcelos Yi Tay Thomas Mensink Alexander Kolesnikov
Filip Pavetić Dustin Tran Thomas Kipf Mario Lučić Xiaohua Zhai Daniel Keysers
Jeremiah Harmsen Neil Houlsby∗
Google Research
Abstract
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the
largest large language models (LLMs) contain upwards of 100B parameters. Vision Transformers (ViT) have
introduced the same architecture to image and video modelling, but these have not yet been successfully
scaled to nearly the same degree; the largest dense ViT contains 4B parameters (Chen et al., 2022). We
present a recipe for highly efficient and stable training of a 22B-parameter ViT (ViT-22B) and perform a
wide variety of experiments on the resulting model. When evaluated on downstream tasks (often with a
lightweight linear model on frozen features), ViT-22B demonstrates increasing performance with scale. We
further observe other interesting benefits of scale, including an improved tradeoff between fairness and
performance, state-of-the-art alignment to human visual perception in terms of shape/texture bias, and
improved robustness. ViT-22B demonstrates the potential for "LLM-like" scaling in vision, and provides
key steps towards getting there.
1 Introduction
Similar to natural language processing, transfer of pre-trained vision backbones has improved performance
on a wide variety of vision tasks (Pan and Yang, 2010; Zhai et al., 2019; Kolesnikov et al., 2020). Larger
datasets, scalable architectures, and new training methods (Mahajan et al., 2018; Dosovitskiy et al., 2021;
Radford et al., 2021; Zhai et al., 2022a) have accelerated this growth. Despite this, vision models have trailed
far behind language models, which have demonstrated emergent capabilities at massive scales (Chowdhery
et al., 2022; Wei et al., 2022). Specifically, the largest dense vision model to date is a mere 4B parameter
ViT (Chen et al., 2022), while a modestly parameterized model for an entry-level competitive language model
typically contains over 10B parameters (Raffel et al., 2019; Tay et al., 2022; Chung et al., 2022), and the largest
dense language model has 540B parameters (Chowdhery et al., 2022). Sparse models demonstrate the same
trend, where language models go beyond a trillion parameters (Fedus et al., 2021) but the largest reported
sparse vision models are only 15B (Riquelme et al., 2021).
This paper presents ViT-22B, the largest dense ViT model to date. En route to 22B parameters, we uncover
pathological training instabilities which prevent scaling the default recipe, and demonstrate architectural
changes which make it possible. Further, we carefully engineer the model to enable model-parallel training at
unprecedented efficiency. ViT-22B's quality is assessed via a comprehensive evaluation suite of tasks, ranging
from (few-shot) classification to dense output tasks, where it reaches or advances the current state-of-the-art.
For example, even when used as a frozen visual feature extractor, ViT-22B achieves an accuracy of 89.5% on
∗Core contributors. Correspondence: dehghani@google.com
|
2103.00020.pdf | Learning Transferable Visual Models From Natural Language Supervision
Alec Radford* 1Jong Wook Kim* 1Chris Hallacy1Aditya Ramesh1Gabriel Goh1Sandhini Agarwal1
Girish Sastry1Amanda Askell1Pamela Mishkin1Jack Clark1Gretchen Krueger1Ilya Sutskever1
Abstract
State-of-the-art computer vision systems are
trained to predict a fixed set of predetermined
object categories. This restricted form of super-
vision limits their generality and usability since
additional labeled data is needed to specify any
other visual concept. Learning directly from raw
text about images is a promising alternative which
leverages a much broader source of supervision.
We demonstrate that the simple pre-training task
of predicting which caption goes with which im-
age is an efficient and scalable way to learn SOTA
image representations from scratch on a dataset
of 400 million (image, text) pairs collected from
the internet. After pre-training, natural language
is used to reference learned visual concepts (or
describe new ones) enabling zero-shot transfer
of the model to downstream tasks. We study
the performance of this approach by benchmark-
ing on over 30 different existing computer vi-
sion datasets, spanning tasks such as OCR, ac-
tion recognition in videos, geo-localization, and
many types of fine-grained object classification.
The model transfers non-trivially to most tasks
and is often competitive with a fully supervised
baseline without the need for any dataset spe-
cific training. For instance, we match the ac-
curacy of the original ResNet-50 on ImageNet
zero-shot without needing to use any of the 1.28
million training examples it was trained on. We
release our code and pre-trained model weights at
https://github.com/OpenAI/CLIP .
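A minimal sketch of the pre-training objective described above: score every (image, text) pair in a batch and train with a symmetric cross-entropy so that the true pairs score highest. The encoders, embedding dimension, and the learned temperature are simplified; this illustrates the task, not the released implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) embeddings of paired images and captions."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature               # (batch, batch) pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # the i-th image matches the i-th caption
    loss_images = F.cross_entropy(logits, targets)                # predict the caption for each image
    loss_texts = F.cross_entropy(logits.t(), targets)             # predict the image for each caption
    return (loss_images + loss_texts) / 2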
1. Introduction and Motivating Work
Pre-training methods which learn directly from raw text
have revolutionized NLP over the last few years (Dai &
Le, 2015; Peters et al., 2018; Howard & Ruder, 2018; Rad-
ford et al., 2018; Devlin et al., 2018; Raffel et al., 2019).
*Equal contribution1OpenAI, San Francisco, CA 94110, USA.
Correspondence to: <{alec, jongwook}@openai.com>.
Task-agnostic objectives such as autoregressive and masked
language modeling have scaled across many orders of mag-
nitude in compute, model capacity, and data, steadily im-
proving capabilities. The development of “text-to-text” as
a standardized input-output interface (McCann et al., 2018;
Radford et al., 2019; Raffel et al., 2019) has enabled task-
agnostic architectures to zero-shot transfer to downstream
datasets removing the need for specialized output heads or
dataset specific customization. Flagship systems like GPT-3
(Brown et al., 2020) are now competitive across many tasks
with bespoke models while requiring little to no dataset
specific training data.
These results suggest that the aggregate supervision acces-
sible to modern pre-training methods within web-scale col-
lections of text surpasses that of high-quality crowd-labeled
NLP datasets. However, in other fields such as computer
vision it is still standard practice to pre-train models on
crowd-labeled datasets such as ImageNet (Deng et al., 2009).
Could scalable pre-training methods which learn directly
from web text result in a similar breakthrough in computer
vision? Prior work is encouraging.
Over 20 years ago Mori et al. (1999) explored improving
content based image retrieval by training a model to pre-
dict the nouns and adjectives in text documents paired with
images. Quattoni et al. (2007) demonstrated it was possi-
ble to learn more data efficient image representations via
manifold learning in the weight space of classifiers trained
to predict words in captions associated with images. Sri-
vastava & Salakhutdinov (2012) explored deep represen-
tation learning by training multimodal Deep Boltzmann
Machines on top of low-level image and text tag features.
Joulin et al. (2016) modernized this line of work and demon-
strated that CNNs trained to predict words in image cap-
tions learn useful image representations. They converted
the title, description, and hashtag metadata of images in the
YFCC100M dataset (Thomee et al., 2016) into a bag-of-
words multi-label classification task and showed that pre-
training AlexNet (Krizhevsky et al., 2012) to predict these
labels learned representations which preformed similarly
to ImageNet-based pre-training on transfer tasks. Li et al.
(2017) then extended this approach to predicting phrase n-
grams in addition to individual words and demonstrated the
ability of their system to zero-shot transfer to other image |
2304.10464.pdf | Learning to Program with Natural Language
Yiduo Guo1, Yaobo Liang2, Chenfei Wu2, Wenshan Wu2, Dongyan Zhao1, Duan Nan2
1Wangxuan Institute of Computer Technology, Peking University,2Microsoft Research, Asia
yiduo@stu.pku.edu.cn, zhaodongyan@pku.edu.cn
{yaobo.liang, chenfei.wu, wenshan.wu, nanduan}@microsoft.com
Abstract
Large Language Models (LLMs) have shown remarkable performance in various
basic natural language tasks, which raises hopes for achieving Artificial General
Intelligence. To better complete complex tasks, we need LLMs to program for
the task and then follow the program to generate a specific solution for the test
sample. We propose using natural language as a new programming language to
describe task procedures, making them easily understandable to both humans and
LLMs. The LLM is capable of directly generating natural language programs, but
these programs may still contain factual errors or incomplete steps. Therefore, we
further propose the Learning to Program ( LP) method to ask LLMs themselves
to learn natural language programs from the training dataset of complex tasks
and then use the learned program to guide inference. Our experiments on the
AMPS (high school math) and Math (competition mathematics problems) datasets
demonstrate the effectiveness of our approach. When testing ChatGPT on 10
tasks from the AMPS dataset, our LP method's average performance outperformed
the direct zero-shot test performance by 18.3 %. We release our code at https:
//github.com/microsoft/NaturalLanguageProgram .
1 Introduction
Large Language Models (LLMs), such as ChatGPT and GPT-4 [17], have recently achieved strong
zero-shot/few-shot performance on various natural language tasks, such as generating passages [1],
generating code [14], and solving grade school math problems [19]. LLMs can further learn new
basic abilities by connecting them with millions of APIs like TaskMatrix.AI [13, 26] or new tools
like ToolFormer [21]. However, LLMs still struggle to complete complex tasks, such as writing
a long novel [27], coding for a large project [18], and solving complex math problems [5]. This
indicates that knowing every basic capability is insufficient to complete complex tasks - we also
require a procedure for how to combine them. Like programmers who use programming languages to
teach computers how to complete complex tasks, we propose leveraging natural language as the new
programming language to teach LLMs how to complete complex tasks.
To better control the inference process for complex tasks, we propose to make the inference process
into two steps explicitly: In the first step, we find a natural language program pfor the test sample.
In the second step, we use the test sample, prompt, and natural language program pas the input
to LLMs to generate the specific solution. Natural Language Programs can guide LLMs by first
analyzing all possible cases and then decomposing the complex task into sub-tasks, sequentially
employing functions to complete the task. These programs provide general solutions that explain the
generic steps to solve the task, including which basic capability to use and the necessary background
knowledge. For example, if the task is to calculate the sine value of an angle in a right triangle,
given the lengths of its legs, the natural language program for this task would be as follows: First,
use the Pythagorean theorem to calculate the length of the leg whose length is not given. Then,
calculate the sine value of the target angle based on its definition. Natural language programs offer
two key advantages: (1) generalizability, as they can guide LLMs to solve all questions similar to the |
2112.11446.pdf | 2021-12-08
Scaling Language Models: Methods, Analysis
& Insights from Training Gopher
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides,
Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer,
Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese,
Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell,
Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland,
Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh,
Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli,
Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama,
Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas,
Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel,
William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway,
Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu and Geoffrey Irving
Language modelling provides a step towards intelligent communication systems by harnessing large
repositories of written human knowledge to better predict and understand the world. In this paper, we
present an analysis of Transformer-based language model performance across a wide range of model
scales — from models with tens of millions of parameters up to a 280 billion parameter model called
Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance across
the majority. Gains from scale are largest in areas such as reading comprehension, fact-checking, and
the identification of toxic language, but logical and mathematical reasoning see less benefit. We provide
a holistic analysis of the training dataset and model’s behaviour, covering the intersection of model
scale with bias and toxicity. Finally we discuss the application of language models to AI safety and the
mitigation of downstream harms.
Keywords: Natural Language Processing, Language Models, Deep Learning
Contents
1 Introduction 3
2 Background 5
3 Method 5
3.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.2 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.3 Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.4 Training Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 Results 7
4.1 Task Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.2 Comparisons with State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Corresponding authors: jack.w.rae@gmail.com, geoffreyi@deepmind.com
©2022 DeepMind. All rights reserved. |
2310.06825.pdf | Mistral 7B
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford,
Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel,
Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux,
Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix,
William El Sayed
Abstract
We introduce Mistral 7B, a 7–billion-parameter language model engineered for
superior performance and efficiency. Mistral 7B outperforms the best open 13B
model (Llama 2) across all evaluated benchmarks, and the best released 34B
model (Llama 1) in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding
window attention (SWA) to effectively handle sequences of arbitrary length with a
reduced inference cost. We also provide a model fine-tuned to follow instructions,
Mistral 7B – Instruct, that surpasses Llama 2 13B – chat model both on human and
automated benchmarks. Our models are released under the Apache 2.0 license.
Code: https://github.com/mistralai/mistral-src
Webpage: https://mistral.ai/news/announcing-mistral-7b/
1 Introduction
In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model
performance often necessitates an escalation in model size. However, this scaling tends to increase
computational costs and inference latency, thereby raising barriers to deployment in practical,
real-world scenarios. In this context, the search for balanced models delivering both high-level
performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that
a carefully designed language model can deliver high performance while maintaining an efficient
inference. Mistral 7B outperforms the previous best 13B model (Llama 2, [ 26]) across all tested
benchmarks, and surpasses the best 34B model (LLaMa 34B, [ 25]) in mathematics and code
generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [ 20],
without sacrificing performance on non-code related benchmarks.
Mistral 7B leverages grouped-query attention (GQA) [ 1], and sliding window attention (SWA) [ 6,3].
GQA significantly accelerates the inference speed, and also reduces the memory requirement during
decoding, allowing for higher batch sizes hence higher throughput, a crucial factor for real-time
applications. In addition, SWA is designed to handle longer sequences more effectively at a reduced
computational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms
collectively contribute to the enhanced performance and efficiency of Mistral 7B. |
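To make the sliding window attention (SWA) idea from the Mistral 7B abstract above concrete, here is a hedged sketch of the mask it implies: each position attends only to itself and to the previous W-1 positions. The rolling-buffer cache and the actual window size used by the model are implementation details not covered in this excerpt.

import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where True marks key positions a query may NOT attend to."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (seq_len, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions, shape (1, seq_len)
    future = j > i                           # causal: no attention to later positions
    too_far = j <= i - window                # windowed: nothing beyond the last `window` tokens
    return future | too_far

# e.g. sliding_window_mask(8, 3) lets position 5 attend to positions 3, 4, and 5 only.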
2304.06762.pdf | Shall We Pretrain Autoregressive Language Models with Retrieval?
A Comprehensive Study
Boxin Wang∗‡1Wei Ping∗†2Peng Xu∗2Lawrence McAfee2
Zihan Liu2Mohammad Shoeybi2Yi Dong2Oleksii Kuchaiev2
Bo Li1Chaowei Xiao2,3Anima Anandkumar2Bryan Catanzaro2
Abstract
Large decoder-only language models (LMs)
can be largely improved in terms of perplex-
ity by retrieval ( e.g., RETRO ), but its impact
on text generation quality and downstream
task accuracy is unclear. Thus, it is still an
open question: shall we pretrain large au-
toregressive LMs with retrieval? To answer
it, we perform a comprehensive study on a
scalable pretrained retrieval-augmented LM
(i.e., RETRO) compared with standard GPT
and retrieval-augmented GPT incorporated at
fine-tuning or inference stages. We first pro-
vide the recipe to reproduce RETRO up to
9.5B parameters while retrieving a text corpus
with 330B tokens. Based on that, we have
the following novel findings: i) RETRO out-
performs GPT on text generation with much
less degeneration (i.e., repetition), moderately
higher factual accuracy, and slightly lower
toxicity with a nontoxic retrieval database.
ii) On the LM Evaluation Harness bench-
mark, RETRO largely outperforms GPT on
knowledge-intensive tasks, but is on par with
GPT on other tasks. Furthermore, we intro-
duce a simple variant of the model, RETRO++,
which largely improves open-domain QA re-
sults of original RETRO (e.g., EM score +8.6
on Natural Question) and significantly outper-
forms retrieval-augmented GPT across differ-
ent model sizes. Our findings highlight the
promising direction of pretraining autoregres-
sive LMs with retrieval as future foundation
models. We release our implementation at:
https://github.com/NVIDIA/Megatron
-LM#retro .
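As background for what "pretraining with retrieval" adds to the data pipeline (a rough illustration, not the RETRO architecture itself): each chunk of a training sequence is paired with the most similar chunks from a large retrieval database, typically found by embedding similarity, and those neighbors are made available to the model as extra context.

import numpy as np

def nearest_chunks(query_emb: np.ndarray, db_embs: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k database chunks most similar to a query chunk embedding.

    query_emb: (dim,); db_embs: (num_chunks, dim)."""
    q = query_emb / np.linalg.norm(query_emb)
    d = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity against every database chunk
    return np.argsort(-scores)[:k]       # top-k neighbor indices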
1 Introduction
Large language models (LMs), including masked
LMs (e.g., BERT (Devlin et al., 2018)), autore-
gressive LMs (e.g., GPT (Brown et al., 2020)),
and encoder-decoder LMs (e.g., T5 (Raffel et al.,
∗Equal contribution. ‡Work done during an internship at
NVIDIA.1UIUC.2NVIDIA.3ASU. †Correspondence to:
Wei Ping <wping@nvidia.com>
2020), BART (Lewis et al., 2020a)), have ob-
tained state-of-the-art results for various NLP tasks.
Among them, the autoregressive LMs like GPT-
3 (Brown et al., 2020) and GPT-4 (OpenAI, 2023)
demonstrate noticeable in-context learning abil-
ity and excellent long-form text generation results.
Due to its importance, the community has spent
considerable efforts to scale up such autoregres-
sive generative LMs with more data and param-
eters and observed significant breakthroughs in
a variety of real-world applications (e.g., Brown
et al., 2020), including open-ended text genera-
tion and various downstream tasks (e.g., ques-
tion answering). The successful public exam-
ples include GPT-3 (w/ 170B parameters) (Brown
et al., 2020), Gopher (280B) (Rae et al., 2021),
Megatron-Turing (530B) (Smith et al., 2022), and
PaLM (540B) (Chowdhery et al., 2022).
Although large-scale autoregressive LMs have
achieved huge successes, they also suffer from sev-
eral weaknesses. First, it requires a huge number
of model parameters to memorize the world knowl-
edge, which makes it costly for deployment. Sec-
ond, it is difficult to safeguard factual accuracy,
which may provide users with incorrect informa-
tion (Lee et al., 2022). Third, it is expensive to
update the model knowledge learned during pre-
training with up-to-date facts (Meng et al., 2022),
yielding outdated answers (Lewis et al., 2020b).
To mitigate the problems above, one line of
research proposes to improve language models
with retrieval. The retrieval process can be inte-
grated into LMs at: i)fine-tuning stage (Karpukhin
et al., 2020; Lewis et al., 2020b; Guu et al., 2020),
orii)pretraining stage (Borgeaud et al., 2022;
Izacard et al., 2022). Most previous work aug-
ments BERT or encoder-decoder LMs with re-
trieval at fine-tuning stage, demonstrating suc-
cesses for knowledge-intensive NLP tasks (Guu
et al., 2020; Karpukhin et al., 2020; Lewis et al.,
2020b; Khandelwal et al., 2020). However, it re- |
2404.12096.pdf | LONG EMBED : EXTENDING EMBEDDING MODELS FOR
LONG CONTEXT RETRIEVAL
Dawei Zhu∗ηLiang WangπNan YangπYifan SongηWenhao Wuη
Furu WeiπSujian Liη
ηPeking UniversityπMicrosoft Corporation
https://github.com/dwzhu-pku/LongEmbed
ABSTRACT
Embedding models play a pivot role in modern NLP applications such as IR and
RAG. While the context limit of LLMs has been pushed beyond 1 million tokens,
embedding models are still confined to a narrow context window not exceeding
8k tokens, which keeps them out of application scenarios requiring long inputs such as legal
contracts. This paper explores context window extension of existing embedding
models, pushing the limit to 32k without requiring additional training. First, we
examine the performance of current embedding models for long context retrieval
on our newly constructed LONGEMBED benchmark. LONGEMBED comprises two
synthetic tasks and four carefully chosen real-world tasks, featuring documents of
varying length and dispersed target information. Benchmarking results underscore
huge room for improvement in these models. Based on this, comprehensive exper-
iments show that training-free context window extension strategies like position
interpolation can effectively extend the context window of existing embedding
models by several folds, regardless of their original context being 512 or beyond
4k. Furthermore, for models employing absolute position encoding (APE), we
show the possibility of further fine-tuning to harvest notable performance gains
while strictly preserving original behavior for short inputs. For models using rotary
position embedding (RoPE), significant enhancements are observed when em-
ploying RoPE-specific methods, such as NTK and SelfExtend, indicating RoPE’s
superiority over APE for context window extension. To facilitate future research,
we release E5 Base-4k and E5-RoPE Base, along with the LONGEMBED benchmark.
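A minimal sketch of the training-free position interpolation idea mentioned above, in its RoPE flavor: position indices of a long input are rescaled into the range the model was trained on before the rotary angles are computed. The scaling factor and dimensions below are illustrative; APE models and the NTK/SelfExtend variants discussed later modify this differently.

import torch

def interpolated_rope_angles(seq_len: int, original_ctx: int = 512, target_ctx: int = 4096,
                             dim: int = 64, base: float = 10000.0) -> torch.Tensor:
    """Rotary angles with positions linearly rescaled so target_ctx maps into the trained range."""
    scale = original_ctx / target_ctx                               # e.g. 512 / 4096 = 0.125
    positions = torch.arange(seq_len, dtype=torch.float32) * scale  # compressed position indices
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions, inv_freq)                         # (seq_len, dim // 2) angles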
[Figure 1: panels (a)–(c); the caption below describes them.]
Figure 1: (a) Overview of the LONGEMBED benchmark. (b) Performance of current embedding
models on passkey retrieval, with evaluation length ranging from 256 to 32,768.1 ▲/♦ denotes
embedding models with 512 / ≥4k context. The greener a cell is, the higher retrieval accuracy
this model achieves on the corresponding evaluation length. (c) Effects of context window extension
methods on E5, E5-RoPE, E5-Mistral, measured by improvements of Avg. Scores on LONGEMBED.
SE / NTK is short for SelfExtend / NTK-Aware Interpolation.
∗Work done during Dawei’s internship at MSR Asia. Prof. Sujian Li is the corresponding author.
1For simplicity, we report results from the base versions of the included models by default.
|
2103.06874.pdf | CANINE : Pre-training an Efficient Tokenization-Free Encoder
for Language Representation
Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting
Google Research
{jhclark,dhgarrette,iuliaturc,jwieting}@google.com
Abstract
Pipelined NLP systems have largely been
superseded by end-to-end neural model-
ing, yet nearly all commonly-used models
still require an explicit tokenization step.
While recent tokenization approaches based
on data-derived subword lexicons are less
brittle than manually engineered tokenizers,
these techniques are not equally suited to all
languages, and the use of any fixed vocab-
ulary may limit a model’s ability to adapt.
In this paper, we present CANINE, a neural
encoder that operates directly on character
sequences—without explicit tokenization or
vocabulary—and a pre-training strategy that
operates either directly on characters or op-
tionally uses subwords as a soft inductive
bias. To use its finer-grained input ef-
fectively and efficiently, C ANINE combines
downsampling, which reduces the input se-
quence length, with a deep transformer
stack, which encodes context. CANINE out-
performs a comparable mBERT model by
5.7 F1 on TYDIQA, a challenging mul-
tilingual benchmark, despite having fewer
model parameters.
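A rough sketch of the downsample-then-encode idea in the abstract: embed raw Unicode code points, shorten the sequence with a strided convolution, and hand the shorter sequence to a deep transformer stack. The vocabulary handling (the paper's character hashing) and the exact downsampling rate are simplified here.

import torch
import torch.nn as nn

class CharDownsampler(nn.Module):
    def __init__(self, max_codepoint: int = 1 << 16, d_model: int = 768, rate: int = 4):
        super().__init__()
        # Assumes code points < 65536 for simplicity; the real model hashes arbitrary code points.
        self.embed = nn.Embedding(max_codepoint, d_model)
        self.down = nn.Conv1d(d_model, d_model, kernel_size=rate, stride=rate)  # rate-x shorter sequence

    def forward(self, codepoints: torch.Tensor) -> torch.Tensor:
        x = self.embed(codepoints).transpose(1, 2)   # (batch, d_model, num_chars)
        return self.down(x).transpose(1, 2)          # (batch, num_chars // rate, d_model)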
1 Introduction
End-to-end neural models have generally replaced
the traditional NLP pipeline, and with it, the
error cascades and feature engineering common
to such systems, preferring instead to let the
model automatically induce its own sophisticated
representations. Tokenization, however, is one
of few holdovers from that era, with nearly all
commonly-used models today requiring an ex-
plicit preprocessing stage to segment a raw text
CANINE: Character Architecture with No tokenization
In Neural Encoders.
Code and checkpoints are available on GitHub at
http://caninemodel.page.link/code .
Published in Transactions of the Association for Compu-
tational Linguistics (TACL), 2022.
string into a sequence of discrete model inputs.
Broadly speaking, tokenizers are generally either
carefully constructed systems of language-specific
rules, which are costly, requiring both manual
feature engineering and linguistic expertise, or
data-driven algorithms such as Byte Pair Encod-
ing (Sennrich et al., 2016), WordPiece (Wu et al.,
2016), or SentencePiece (Kudo and Richardson,
2018) that split strings based on frequencies in a
corpus, which are less brittle and easier to scale,
but are ultimately too simplistic to properly handle
the wide range of linguistic phenomena that can’t
be captured by mere string-splitting (§2.1).
The degree of sophistication required to accu-
rately capture the full breadth of linguistic phe-
nomena, along with the infeasibility of writing
such rules by hand across all languages and do-
mains, suggests that explicit tokenization itself is
problematic. In contrast, an end-to-end model
that operates directly on raw text strings would
avoid these issues, instead learning to compose in-
dividual characters into its own arbitrarily com-
plex features, with potential benefits for both ac-
curacy and ease of use. While this change is con-
ceptually very simple—one could replace the sub-
word vocabulary in a model like BERT (Devlin
et al., 2019) with a vocabulary made solely of indi-
vidual characters—doing so leads to two immedi-
ate problems. First, the computational complexity
of a transformer (Vaswani et al., 2017), the main
components in BERT as well as other models such
as GPT (Radford et al., 2019; Brown et al., 2020)
and T5 (Raffel et al., 2020), grows quadratically
with the length of the input. Since standard sub-
word models have roughly four characters per sub-
word on average, the 4x increase in input sequence
length would result in a significantly slower model.
Second, simply switching to a character vocabu-
lary yields empirically poor results (§4.2).
In order to enable tokenization-free model-
ing that overcomes these obstacles, we present |
1810.12885.pdf | ReCoCoRD: Bridging the Gap between Human
and Machine Commonsense Reading Comprehension
Sheng Zhang†∗, Xiaodong Liu‡, Jingjing Liu‡, Jianfeng Gao‡,
Kevin Duh†and Benjamin Van Durme†
†Johns Hopkins University
‡Microsoft Research
Abstract
We present a large-scale dataset, ReCoRD,
for machine reading comprehension requiring
commonsense reasoning. Experiments on this
dataset demonstrate that the performance of
state-of-the-art MRC systems fall far behind
human performance. ReCoRD represents a
challenge for future research to bridge the gap
between human and machine commonsense
reading comprehension. ReCoRDis available
athttp://nlp.jhu.edu/record .
1 Introduction
Machine reading comprehension (MRC) is a cen-
tral task in natural language understanding, with
techniques lately driven by a surge of large-scale
datasets (Hermann et al., 2015; Hill et al., 2015;
Rajpurkar et al., 2016; Trischler et al., 2017;
Nguyen et al., 2016), usually formalized as a task
of answering questions given a passage. An in-
creasing number of analyses (Jia and Liang, 2017;
Rajpurkar et al., 2018; Kaushik and Lipton, 2018)
have revealed that a large portion of questions in
these datasets can be answered by simply match-
ing the patterns between the question and the an-
swer sentence in the passage. While systems
may match or even outperform humans on these
datasets, our intuition suggests that there are at
least some instances in human reading compre-
hension that require more than what existing chal-
lenge tasks are emphasizing. One primary type
of questions these datasets lack are the ones that
require reasoning over common sense or under-
standing across multiple sentences in the pas-
sage (Rajpurkar et al., 2016; Trischler et al., 2017).
To overcome this limitation, we introduce
a large-scale dataset for reading comprehen-
sion, ReCoRD ([ˈrɛkərd]), which consists of
over 120,000 examples, most of which require
∗Work done when Sheng Zhang was visiting Microsoft.
Passage: (CNN) -- A lawsuit has been filed claiming that the iconic Led Zeppelin song "Stairway to Heaven" was far from original. The suit, filed on May 31 in the United States District Court Eastern District of Pennsylvania, was brought by the estate of the late musician Randy California against the surviving members of Led Zeppelin and their record label. The copyright infringement case alleges that the Zeppelin song was taken from the single "Taurus" by the 1960s band Spirit, for whom California served as lead guitarist. "Late in 1968, a then new band named Led Zeppelin began touring in the United States, opening for Spirit," the suit states. "It was during this time that Jimmy Page, Led Zeppelin's guitarist, grew familiar with 'Taurus' and the rest of Spirit's catalog. Page stated in interviews that he found Spirit to be 'very good' and that the band's performances struck him 'on an emotional level.'"
• Suit claims similarities between two songs
• Randy California was guitarist for the group Spirit
• Jimmy Page has called the accusation "ridiculous"
(Cloze-style) Query: According to claims in the suit, "Parts of 'Stairway to Heaven,' instantly recognizable to the music fans across the world, sound almost identical to significant portions of 'X.'"
Reference Answers: Taurus
Figure 1: An example from ReCoRD. The passage is a snippet from a news article followed by some bullet
points which summarize the news event. Named entities highlighted in the passage are possible answers to
the query. The query is a statement that is factually supported by the passage. X in the statement indicates
a missing named entity. The goal is to find the correct entity in the passage that best fits X.
deep commonsense reasoning. ReCoRD is an
acronym for the Reading Comprehension with
Commonsense Reasoning Dataset.
Figure 1 shows a ReCoRD example: the pas-
sage describes a lawsuit claiming that the band
"Led Zeppelin" had plagiarized the song "Taurus" |
2209.00626.pdf | arXiv:2209.00626v4 [cs.AI] 22 Feb 2023The Alignment Problem from a Deep Learning Perspective
Richard Ngo
OpenAI
richard@openai.comLawrence Chan
UC Berkeley (EECS)
chanlaw@berkeley.eduSören Mindermann
University of Oxford (CS)
soren.mindermann@cs.ox.ac.uk
Abstract
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide
range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs
could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that
if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to
receive higher reward, learn internally-represented goals which generalize beyond their training distributions,
and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs
might irreversibly undermine human control over the world, and briefly review research directions aimed at
preventing this outcome.
1 Introduction
Over the last decade, advances in deep learning have led to the development of large neural networks with impressive
capabilities in a wide range of domains. In addition to reaching human-level performance on complex games like
StarCraft 2 [ Vinyals et al. ,2019 ] and Diplomacy [ Bakhtin et al. ,2022 ], large neural networks show evidence of
increasing generality [ Bommasani et al. ,2021 ], including advances in sample efficiency [ Brown et al. ,2020 ,Dorner ,
2021 ], cross-task generalization [ Adam et al. ,2021 ], and multi-step reasoning [ Chowdhery et al. ,2022 ]. The rapid
pace of these advances highlights the possibility that, within the coming decades, we may develop artificial general
intelligence (AGI)—that is, AI which can apply domain-general cognitive skills (such as reasoning, memory, and
planning) to perform at or above human level on a wide range of cognitive tasks relevant to the real world (such as
writing software, formulating new scientific theories, or running a company) [ Goertzel ,2014 ].1This possibility is the
aim of major research efforts [ OpenAI ,2023a ,DeepMind ,2023 ] and is taken seriously by leading ML researchers,
who in two recent surveys gave median estimates of 2061 and 2059 for the year in which AI will outperform humans
at all tasks—although some expect this to occur much sooner or later [ Grace et al. ,2018 ,Stein-Perlman et al. ,2022 ].2
The development of AGI could unlock many opportunities, but also comes with serious risks. One concern is known
as the alignment problem : the challenge of ensuring that AI systems pursue goals that match human values or interests
rather than unintended and undesirable goals [ Russell ,2019 ,Gabriel ,2020 ,Hendrycks et al. ,2020 ]. An increasing
body of research aims to proactively address the alignment problem, motivated in large part by the desire to avoid
hypothesized large-scale tail risks from AGIs that pursue unintended goals [ OpenAI ,2023b ,Hendrycks and Mazeika ,
2022 ,Amodei et al. ,2016 ,Hendrycks et al. ,2021 ].
Previous writings have argued that AGIs will be highly challenging to robustly align, and that misaligned AGIs may
pose accident risks on a sufficiently large scale to threaten human civilization [ Russell ,2019 ,Bostrom ,2014 ,Yud-
kowsky ,2016 ,Carlsmith ,2022 ,Cohen et al. ,2022 ]. However, most of these writings only formulate their arguments
in terms of abstract high-level concepts (particularly concepts from classical AI), without grounding them in modern
machine learning techniques, while writings that focus on deep learning techniques do so very informally, and with
little engagement with the deep learning literature [ Ngo,2020 ,Cotra ,2022 ]. This raises the question of whether
there are versions of these arguments which are relevant to, and empirically supported by, the modern deep learning
paradigm.
In this position paper, we hypothesize and defend factors that could lead to large-scale risks if AGIs are trained using
modern deep learning techniques. Specifically, we argue that pretraining AGIs using self-supervised learning and fine-
tuning them using reinforcement learning from human feedback (RLHF) [ Christiano et al. ,2017 ] will plausibly lead
to the emergence of three key properties. First, RLHF allows the possibility of situationally-aware reward hacking |
2209.15571.pdf | Published as a conference paper at ICLR 2023
BUILDING NORMALIZING FLOWS WITH STOCHASTIC
INTERPOLANTS
Michael S. Albergo
Center for Cosmology and Particle Physics
New York University
New York, NY 10003, USA
albergo@nyu.eduEric Vanden-Eijnden
Courant Institute of Mathematical Sciences
New York University
New York, NY 10012, USA
eve2@cims.nyu.edu
ABSTRACT
A generative model based on a continuous-time normalizing flow between any
pair of base and target probability densities is proposed. The velocity field of this
flow is inferred from the probability current of a time-dependent density that in-
terpolates between the base and the target in finite time. Unlike conventional nor-
malizing flow inference methods based the maximum likelihood principle, which
require costly backpropagation through ODE solvers, our interpolant approach
leads to a simple quadratic loss for the velocity itself which is expressed in terms
of expectations that are readily amenable to empirical estimation. The flow can be
used to generate samples from either the base or target, and to estimate the like-
lihood at any time along the interpolant. In addition, the flow can be optimized
to minimize the path length of the interpolant density, thereby paving the way for
building optimal transport maps. In situations where the base is a Gaussian den-
sity, we also show that the velocity of our normalizing flow can also be used to
construct a diffusion model to sample the target as well as estimate its score. How-
ever, our approach shows that we can bypass this diffusion completely and work
at the level of the probability flow with greater simplicity, opening an avenue for
methods based solely on ordinary differential equations as an alternative to those
based on stochastic differential equations. Benchmarking on density estimation
tasks illustrates that the learned flow can match and surpass conventional contin-
uous flows at a fraction of the cost, and compares well with diffusions on image
generation on CIFAR-10 and ImageNet 32×32. The method scales ab-initio ODE
flows to previously unreachable image resolutions, demonstrated up to 128×128.
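As a minimal illustration of the quadratic objective described above, the sketch below uses the simplest linear interpolant x_t = (1 − t) x0 + t x1 between independent base and target samples and regresses a toy velocity model onto its time derivative x1 − x0. The linear-in-features model and all names are assumptions for illustration, not the construction used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def velocity(x, t, W, b):
    # Toy linear-in-features velocity model v(x, t); a real implementation
    # would use a neural network.
    feats = np.concatenate([x, np.full((x.shape[0], 1), t)], axis=1)
    return feats @ W + b

def interpolant_loss(x0, x1, t, W, b):
    # Linear interpolant x_t = (1 - t) x0 + t x1, with time derivative x1 - x0.
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0
    v = velocity(xt, t, W, b)
    # Quadratic loss E || v(x_t, t) - d/dt x_t ||^2, estimated over a batch.
    return np.mean(np.sum((v - target) ** 2, axis=1))

# Toy usage on 2-D data: base = standard Gaussian, target = shifted Gaussian.
x0 = rng.standard_normal((128, 2))
x1 = rng.standard_normal((128, 2)) + 3.0
W = rng.standard_normal((3, 2)) * 0.1
b = np.zeros(2)
print(interpolant_loss(x0, x1, t=0.5, W=W, b=b))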
1 INTRODUCTION
Contemporary generative models have primarily been designed around the construction of a map
between two probability distributions that transforms samples from the first into samples from the
second. While progress has been made from various angles with tools such as implicit maps (Goodfellow
et al., 2014; Brock et al., 2019), and autoregressive maps (Menick & Kalchbrenner, 2019; Razavi
et al., 2019; Lee et al., 2022), we focus on the case where the map has a clear associated probability
flow. Advances in this domain, namely from flow and diffusion models, have arisen through the
introduction of algorithms or inductive biases that make learning this map, and the Jacobian of the
associated change of variables, more tractable. The challenge is to choose what structure to impose
on the transport to best reach a complex target distribution from a simple one used as base, while
maintaining computational efficiency.
In the continuous time perspective, this problem can be framed as the design of a time-dependent map $X_t(x)$ with $t \in [0,1]$, which functions as the push-forward of the base distribution at time $t = 0$ onto some time-dependent distribution that reaches the target at time $t = 1$. Assuming that these distributions have densities supported on $\Omega \subseteq \mathbb{R}^d$, say $\rho_0$ for the base and $\rho_1$ for the target, this amounts to constructing $X_t : \Omega \to \Omega$ such that
$$\text{if } x \sim \rho_0 \text{ then } X_t(x) \sim \rho_t \text{ for some density } \rho_t \text{ such that } \rho_{t=0} = \rho_0 \text{ and } \rho_{t=1} = \rho_1. \tag{1}$$
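Once a velocity field for such a flow is available, the push-forward in Eq. (1) can be realized numerically by integrating the associated ODE dX_t/dt = v(X_t, t) from t = 0 to t = 1. The sketch below uses forward-Euler integration and a hand-specified constant velocity field chosen so that the transport is exact (a pure translation of a Gaussian); it illustrates the mechanics only, and all names are assumed for illustration.

import numpy as np

def push_forward(x0, velocity, n_steps=100):
    # Transport base samples x0 along the flow map X_t by forward-Euler
    # integration of dX_t/dt = v(X_t, t) from t = 0 to t = 1.
    x = x0.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * velocity(x, t)
    return x

# Assumed velocity field for illustration: the constant drift v(x, t) = mu is a
# valid transport (a pure translation) carrying N(0, I) at t = 0 onto N(mu, I)
# at t = 1.
mu = np.array([3.0, -1.0])
samples = np.random.default_rng(1).standard_normal((1000, 2))
pushed = push_forward(samples, lambda x, t: np.broadcast_to(mu, x.shape))
print(pushed.mean(axis=0))  # approximately mu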
2024.04.15.589672v1.full.pdf | DeProt: Protein language modeling with quantizied
structure and disentangled attention
Mingchen Li2,3† Yang Tan2,3† Bozitao Zhong1Ziyi Zhou1Huiqun Yu3
Xinzhu Ma2Wanli Ouyang2Liang Hong1,2∗Bingxin Zhou1∗Pan Tan1,2∗
1Shanghai Jiao Tong University, China
2Shanghai Artificial Intelligence Laboratory, China
3East China University of Science and Technology, China
{hongl3liang,bingxin.zhou,tpan1039}@sjtu.edu.cn
Abstract
Protein language models have exhibited remarkable representational capabilities in
various downstream tasks, notably in the prediction of protein functions. Despite
their success, these models traditionally grapple with a critical shortcoming: the
absence of explicit protein structure information, which is pivotal for elucidating
the relationship between protein sequences and their functionality. Addressing this
gap, we introduce DeProt, a Transformer-based protein language model designed
to incorporate protein sequences and structures. It was pre-trained on millions of
protein structures from diverse natural protein clusters. DeProt first serializes protein structures into
residue-level local-structure sequences and uses a graph neural network based auto-encoder to vectorize
the local structures. Then, these vectors are quantized into discrete structure tokens by a pre-trained
codebook. Meanwhile, DeProt utilizes disentangled attention mechanisms to effectively integrate residue
sequences with structure token sequences. Despite having fewer
parameters and less training data, DeProt significantly outperforms other state-of-
the-art (SOTA) protein language models, including those that are structure-aware
and evolution-based, particularly in the task of zero-shot mutant effect prediction
across 217 deep mutational scanning assays. Furthermore, DeProt exhibits robust
representational capabilities across a spectrum of supervised-learning downstream
tasks. Our comprehensive benchmarks underscore the innovative nature of De-
Prot’s framework and its superior performance, suggesting its wide applicability
in the realm of protein deep learning. For those interested in exploring DeProt
further, the code, model weights, and all associated datasets are accessible at:
https://github.com/ginnm/DeProt.
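To make the structure-token construction described above concrete, the sketch below shows the generic vector-quantization step: each residue-level local-structure embedding is assigned the index of its nearest codebook vector, yielding a sequence of discrete structure tokens. The codebook size, embedding dimension, and function names are assumptions for illustration and do not reflect DeProt's actual configuration.

import numpy as np

def quantize_structure(embeddings, codebook):
    # Assign each residue-level structure embedding (n_residues, d) to the
    # index of its nearest codebook vector (codebook_size, d).
    d2 = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 32))          # assumed codebook size / dim
local_struct_emb = rng.standard_normal((200, 32))  # one 200-residue protein
structure_tokens = quantize_structure(local_struct_emb, codebook)
print(structure_tokens[:10])  # discrete structure tokens fed to the model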
1 Introduction
Proteins have a myriad of diverse functions that underlie the complexity of life [ 1]. Protein Language
Models (PLMs), inspired from Natural Language Processing methods, have heralded a new era in
bioinformatics and structural biology. These models have become indispensable tools for protein
representation. They effectively capture the semantic and grammatical features inherent within
protein sequences through unsupervised training on extensive sequence datasets. As the databases
of protein sequences have grown exponentially over the past decades, data-driven approaches for
modeling proteins at scale have made substantial progress. These PLMs have been trained on diverse
protein sequence databases and with different unsupervised objectives. They have shown remarkable
proficiency in various protein-related tasks.
∗Corresponding authors. †Equal contribution.
Preprint. Under review.
2307.09288.pdf | Llama 2 : Open Foundation and Fine-Tuned Chat Models
Hugo Touvron∗Louis Martin†Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller
Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev
Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang
Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang
Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic
Sergey Edunov Thomas Scialom∗
GenAI, Meta
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat , are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
our human evaluations for helpfulness and safety, may be a suitable substitute for closed-
source models. We provide a detailed description of our approach to fine-tuning and safety
improvements of Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
Contributions for all the authors can be found in Section A.1.
2306.07629.pdf | SqueezeLLM: Dense-and-Sparse Quantization
Sehoon Kim∗
sehoonkim@berkeley.edu
UC Berkeley
Coleman Hooper∗
chooper@berkeley.edu
UC Berkeley
Amir Gholami∗†
amirgh@berkeley.edu
ICSI, UC Berkeley
Zhen Dong
zhendong@berkeley.edu
UC Berkeley
Xiuyu Li
xiuyu@berkeley.edu
UC Berkeley
Sheng Shen
sheng.s@berkeley.edu
UC Berkeley
Michael W. Mahoney
mmahoney@stat.berkeley.edu
ICSI, LBNL, UC Berkeley
Kurt Keutzer
keutzer@berkeley.edu
UC Berkeley
ABSTRACT
Generative Large Language Models (LLMs) have demonstrated re-
markable results for a wide range of tasks. However, deploying
these models for inference has been a significant challenge due to
their unprecedented resource requirements. This has forced exist-
ing deployment frameworks to use multi-GPU inference pipelines,
which are often complex and costly, or to use smaller and less
performant models. In this work, we demonstrate that the main bot-
tleneck for generative inference with LLMs is memory bandwidth,
rather than compute, specifically for single batch inference. While
quantization has emerged as a promising solution by representing
weights with reduced precision, previous efforts have often resulted
in notable performance degradation. To address this, we introduce
SqueezeLLM, a post-training quantization framework that not only
enables lossless compression to ultra-low precisions of up to 3-bit,
but also achieves higher quantization performance under the same
memory constraint. Our framework incorporates two novel ideas:
(i) sensitivity-based non-uniform quantization, which searches for
the optimal bit precision assignment based on second-order in-
formation; and (ii) the Dense-and-Sparse decomposition that stores
outliers and sensitive weight values in an efficient sparse format.
When applied to the LLaMA models, our 3-bit quantization signifi-
cantly reduces the perplexity gap from the FP16 baseline by up to
2.1×as compared to the state-of-the-art methods with the same
memory requirement. Furthermore, when deployed on an A6000
GPU, our quantized models achieve up to 2.3 ×speedup compared
to the baseline. Our code is open-sourced and available online1.
1 INTRODUCTION
Recent advances in Large Language Models (LLMs), with up to
hundreds of billions of parameters and trained on massive text cor-
pora, have showcased their remarkable problem-solving capabilities
across various domains [4, 10, 16, 30, 51, 53, 57, 62, 64, 84]. How-
ever, deploying these models for inference has been a significant
challenge due to their demanding resource requirements. For in-
stance, the LLaMA-65B [ 63] model requires at least 130GB of RAM
to deploy in FP16, and this exceeds current GPU capacity. Even
storing such large-sized models has become costly and complex.
∗Equal contribution
†Corresponding author
1 https://github.com/SqueezeAILab/SqueezeLLM
As will be discussed in Sec. 3, the main performance bottleneck
in LLM inference for generative tasks is memory bandwidth rather
than compute. This means that the speed at which we can load
and store parameters becomes the primary latency bottleneck for
memory-bound problems, rather than arithmetic computations.
However, recent advancements in memory bandwidth technology
have been significantly slower than the improvements in compute, leading to the phenomenon known as the Memory Wall [50].
Consequently, researchers have turned their attention to exploring
algorithmic methods to overcome this challenge.
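A back-of-the-envelope calculation illustrates why single-batch generation is memory-bound: each generated token requires streaming essentially all model weights from memory once, so a lower bound on per-token latency is the model size in bytes divided by memory bandwidth. The parameter count matches the FP16 LLaMA-65B figure cited above, while the bandwidth value is an assumed round number, not a measurement of any specific GPU.

params = 65e9            # parameters (e.g., LLaMA-65B)
bytes_per_param = 2      # FP16 storage
bandwidth = 2.0e12       # bytes/s, assumed HBM bandwidth for illustration

model_bytes = params * bytes_per_param        # about 130 GB
latency_per_token = model_bytes / bandwidth   # seconds, memory-bound lower bound
print(latency_per_token * 1e3, "ms/token (lower bound)")
# Quantizing weights to 3-4 bits shrinks model_bytes, and hence this bound,
# by roughly 4-5x, which is why weight quantization directly speeds up decoding.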
One promising approach is quantization, where model parame-
ters are stored at lower precision, instead of the typical 16 or 32-bit
precision used for training. For instance, it has been demonstrated
that LLM models can be stored in 8-bit precision without incurring
performance degradation [ 75], where 8-bit quantization not only
reduces the storage requirements by half but also has the potential
to improve inference latency and throughput. As a result, there
has been significant research interest in quantizing models to even
lower precisions. A pioneering approach is GPTQ [ 19] which uses
a training-free quantization technique that achieves near-lossless
4-bit quantization for large LLM models with over tens of billions
of parameters. However, achieving high quantization performance
remains challenging, particularly with lower bit precision and for
relatively smaller models (e.g., <50B parameters) such as the recent
LLaMA [64] or its instruction-following variants [8, 22, 61].
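For reference, an 8-bit round-to-nearest baseline of the kind alluded to earlier in this paragraph can be written in a few lines. This is a generic sketch of symmetric, per-tensor uniform quantization, not the specific procedure of any of the cited works.

import numpy as np

def quantize_int8(W):
    # Symmetric uniform quantization: map floats to integers in [-127, 127]
    # using a single per-tensor scale.
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

W = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(W)
print(np.max(np.abs(W - dequantize(q, scale))))  # small rounding error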
In this paper, we conduct an extensive study of low-bit precision
quantization and identify limitations in existing approaches. Build-
ing upon these insights, we propose a novel solution that achieves
lossless compression and improved quantization performance for
models of the same size, even at precisions as low as 3 bits.
Contributions. We start by presenting performance modeling re-
sults demonstrating that the memory, rather than the compute, is
the primary bottleneck in LLM inference with generative tasks.
Building on this insight, we introduce SqueezeLLM, a post-training
quantization framework that incorporates a novel sensitivity-based
non-uniform quantization technique and a Dense-and-Sparse decom-
position method. These techniques enable ultra-low-bit precision
quantization, while maintaining competitive model performance,
significantly reducing the model sizes and inference time costs. In
more detail, our main contributions can be summarized as follows:
2205.15317.pdf | Chefs’ Random Tables: Non-Trigonometric Random
Features
Valerii Likhosherstov*
University of Cambridge
vl304@cam.ac.uk
Krzysztof Choromanski*
Google Research & Columbia University
kchoro@google.com
Avinava Dubey*
Google Research
Frederick Liu*
Google Research
Tamas Sarlos
Google Research
Adrian Weller
University of Cambridge &
The Alan Turing Institute
Abstract
We introduce chefs’ random tables (CRTs), a new class of non-trigonometric
random features (RFs) to approximate Gaussian and softmax-kernels. CRTs are
an alternative to standard random kitchen sink (RKS) methods, which inherently
rely on the trigonometric maps [ 41]. We present variants of CRTs where RFs are
positive, a key requirement for applications in recent low-rank Transformers [ 13].
Further variance reduction is possible by leveraging statistics which are simple to
compute. One instantiation of CRTs, the optimal positive random features (OPRFs),
is to our knowledge the first RF method for unbiased softmax-kernel estimation
with positive and bounded RFs, resulting in exponentially small tails and much
lower variance than its counterparts. As we show, orthogonal random features
applied in OPRFs provide additional variance reduction for any dimensionality d
(not only asymptotically for sufficiently large d, as for RKS). We test CRTs on
many tasks ranging from non-parametric classification to training Transformers for
text, speech and image data, obtaining new state-of-the-art results for low-rank text
Transformers, while providing linear space and time complexity of the attention.
1 Introduction & related work
The idea that nonlinear mappings of the random-weight linear combinations of data features can
be used to linearize various nonlinear similarity functions transformed kernel methods. This led to
the development of Random Kitchen Sinks (RKSs) techniques; and the new field of scalable kernel
algorithms, introduced in the paper trilogy [ 39,40,41], was born. RKSs were subsequently used
in many applications, ranging from kernel and function-to-function regression [ 1,30,37], SVM
algorithms [ 45] to operator-valued and semigroup kernels [ 33,52], neural networks [ 23,51,9,25]
and even differentially-private ML algorithms [ 8], as well as (very recently) nonparametric adaptive
control [3]. Random features (RFs) are a subject of much theoretical analysis [31, 53, 46, 43].
To approximate shift invariant (e.g. Gaussian, Cauchy or Laplace) and softmax kernels, RKSs rely on
the trigonometric nonlinear mappings provided directly by Bochner’s Theorem [ 33]. Trigonometric
RFs provide strong concentration results (e.g. uniform convergence, see Claim 1 in [ 40]), but suffer
from a weakness that was noted recently – they are not guaranteed to be positive. This makes them
unsuitable for approximating softmax-attention in scalable Transformers relying on implicit attention
via random features [ 13]. As noted in [ 13], trigonometric features lead to unstable training, as they
yield poor approximations of the partition functions applied to renormalize attention and involving
* Equal contribution
Preprint. Under review.