| date | arxiv_id | votes | title | abstract | url |
|---|---|---|---|---|---|
2023-05-16 | 2305.09515 | 3 | AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation | Diffusion models have gained significant attention in the realm of image
generation due to their exceptional performance. Their success has been
recently expanded to text generation via generating all tokens within a
sequence concurrently. However, natural language exhibits a far more pronounced
sequential dependency in comparison to images, and the majority of existing
language models are trained with a left-to-right auto-regressive approach. To
account for the inherent sequential characteristic of natural language, we
introduce Auto-Regressive Diffusion (AR-Diffusion). AR-Diffusion ensures that
the generation of tokens on the right depends on the generated ones on the
left, a mechanism achieved through employing a dynamic number of denoising
steps that vary based on token position. This results in tokens on the left
undergoing fewer denoising steps than those on the right, thereby enabling them
to generate earlier and subsequently influence the generation of tokens on the
right. In a series of experiments on various text generation tasks, including
text summarization, machine translation, and common sense generation,
AR-Diffusion clearly demonstrated its superiority over existing diffusion
language models and that it can be $100\times\sim600\times$ faster when
achieving comparable results. Our code is available at
https://github.com/microsoft/ProphetNet/tree/master/AR-diffusion. | https://huggingface.co/papers/2305.09515 |
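The position-dependent denoising idea can be illustrated with a small sketch. The linear skew below is an illustrative assumption, not the paper's exact schedule: each token's remaining timestep lags a fixed number of global steps behind the token to its left, so left tokens reach t=0 (fully generated) first.

```python
import numpy as np

def token_timesteps(n_tokens: int, total_steps: int, global_step: int, skew: int = 2) -> np.ndarray:
    """Hypothetical per-token diffusion timestep: tokens further right keep a
    larger remaining timestep, so left tokens finish denoising earlier and can
    condition the generation of tokens to their right."""
    positions = np.arange(n_tokens)
    # Each position lags `skew` global steps behind the position to its left.
    t = total_steps - (global_step - skew * positions)
    return np.clip(t, 0, total_steps)

ts = token_timesteps(n_tokens=8, total_steps=100, global_step=50)
# leftmost tokens carry smaller remaining timesteps than rightmost tokens
```

With this skew, decoding proceeds left-to-right in expectation even though all positions are updated in parallel at every global step.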
2023-05-16 | 2305.08298 | 3 | Symbol tuning improves in-context learning in language models | We present symbol tuning - finetuning language models on in-context
input-label pairs where natural language labels (e.g., "positive/negative
sentiment") are replaced with arbitrary symbols (e.g., "foo/bar"). Symbol
tuning leverages the intuition that when a model cannot use instructions or
natural language labels to figure out a task, it must instead do so by learning
the input-label mappings.
We experiment with symbol tuning across Flan-PaLM models up to 540B
parameters and observe benefits across various settings. First, symbol tuning
boosts performance on unseen in-context learning tasks and is much more robust
to underspecified prompts, such as those without instructions or without
natural language labels. Second, symbol-tuned models are much stronger at
algorithmic reasoning tasks, with up to 18.2% better performance on the List
Functions benchmark and up to 15.3% better performance on the Simple Turing
Concepts benchmark. Finally, symbol-tuned models show large improvements in
following flipped labels presented in-context, meaning that they are more
capable of using in-context information to override prior semantic knowledge. | https://huggingface.co/papers/2305.08298 |
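The data transform behind symbol tuning is simple enough to sketch directly; the symbols and examples below are illustrative, matching the "foo/bar" example from the abstract.

```python
# Symbol-tuning data transform (sketch): natural-language labels are replaced
# with arbitrary symbols, so the model can no longer rely on label semantics
# and must learn the task from the in-context input-label mappings alone.
def symbolize(examples, symbol_map):
    return [(text, symbol_map[label]) for text, label in examples]

data = [("great movie!", "positive"), ("terrible plot.", "negative")]
tuned = symbolize(data, {"positive": "foo", "negative": "bar"})
# → [("great movie!", "foo"), ("terrible plot.", "bar")]
```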
2023-05-16 | 2305.07961 | 3 | Leveraging Large Language Models in Conversational Recommender Systems | A Conversational Recommender System (CRS) offers increased transparency and
control to users by enabling them to engage with the system through a real-time
multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an
unprecedented ability to converse naturally and incorporate world knowledge and
common-sense reasoning into language understanding, unlocking the potential of
this paradigm. However, effectively leveraging LLMs within a CRS introduces new
technical challenges, including properly understanding and controlling a
complex conversation and retrieving from external sources of information. These
issues are exacerbated by a large, evolving item corpus and a lack of
conversational data for training. In this paper, we provide a roadmap for
building an end-to-end large-scale CRS using LLMs. In particular, we propose
new implementations for user preference understanding, flexible dialogue
management and explainable recommendations as part of an integrated
architecture powered by LLMs. For improved personalization, we describe how an
LLM can consume interpretable natural language user profiles and use them to
modulate session-level context. To overcome conversational data limitations in
the absence of an existing production CRS, we propose techniques for building a
controllable LLM-based user simulator to generate synthetic conversations. As a
proof of concept we introduce RecLLM, a large-scale CRS for YouTube videos
built on LaMDA, and demonstrate its fluency and diverse functionality through
some illustrative example conversations. | https://huggingface.co/papers/2305.07961 |
2023-05-16 | 2305.09137 | 2 | Pre-Training to Learn in Context | In-context learning, where pre-trained language models learn to perform tasks
from task examples and instructions in their contexts, has attracted much
attention in the NLP community. However, the ability of in-context learning is
not fully exploited because language models are not explicitly trained to learn
in context. To this end, we propose PICL (Pre-training for In-Context
Learning), a framework to enhance the language models' in-context learning
ability by pre-training the model on a large collection of "intrinsic tasks" in
the general plain-text corpus using the simple language modeling objective.
PICL encourages the model to infer and perform tasks by conditioning on the
contexts while maintaining task generalization of pre-trained models. We
evaluate the in-context learning performance of the model trained with PICL on
seven widely-used text classification datasets and the Super-NaturalInstructions
benchmark, which contains 100+ NLP tasks formulated as text generation. Our
experiments show that PICL is more effective and task-generalizable than a
range of baselines, outperforming language models with nearly 4x more
parameters. The code is publicly available at https://github.com/thu-coai/PICL. | https://huggingface.co/papers/2305.09137 |
2023-05-16 | 2305.08810 | 2 | AutoRecon: Automated 3D Object Discovery and Reconstruction | A fully automated object reconstruction pipeline is crucial for digital
content creation. While the area of 3D reconstruction has witnessed profound
developments, the removal of background to obtain a clean object model still
relies on different forms of manual labor, such as bounding box labeling, mask
annotations, and mesh manipulations. In this paper, we propose a novel
framework named AutoRecon for the automated discovery and reconstruction of an
object from multi-view images. We demonstrate that foreground objects can be
robustly located and segmented from SfM point clouds by leveraging
self-supervised 2D vision transformer features. Then, we reconstruct decomposed
neural scene representations with dense supervision provided by the decomposed
point clouds, resulting in accurate object reconstruction and segmentation.
Experiments on the DTU, BlendedMVS and CO3D-V2 datasets demonstrate the
effectiveness and robustness of AutoRecon. | https://huggingface.co/papers/2305.08810 |
2023-05-16 | 2305.08809 | 2 | Interpretability at Scale: Identifying Causal Mechanisms in Alpaca | Obtaining human-interpretable explanations of large, general-purpose language
models is an urgent goal for AI safety. However, it is just as important that
our interpretability methods are faithful to the causal dynamics underlying
model behavior and able to robustly generalize to unseen inputs. Distributed
Alignment Search (DAS) is a powerful gradient descent method grounded in a
theory of causal abstraction that uncovered perfect alignments between
interpretable symbolic algorithms and small deep learning models fine-tuned for
specific tasks. In the present paper, we scale DAS significantly by replacing
the remaining brute-force search steps with learned parameters -- an approach
we call Boundless DAS. This enables us to efficiently search for interpretable
causal structure in large language models while they follow instructions. We
apply Boundless DAS to the Alpaca model (7B parameters), which, off the shelf,
solves a simple numerical reasoning problem. With Boundless DAS, we discover
that Alpaca does this by
implementing a causal model with two interpretable boolean variables.
Furthermore, we find that the alignment of neural representations with these
variables is robust to changes in inputs and instructions. These findings mark
a first step toward deeply understanding the inner-workings of our largest and
most widely deployed language models. | https://huggingface.co/papers/2305.08809 |
2023-05-16 | 2305.08677 | 2 | Natural Language Decomposition and Interpretation of Complex Utterances | Designing natural language interfaces has historically required collecting
supervised data to translate user requests into carefully designed intent
representations. This requires enumerating and labeling a long tail of user
requests, which is challenging. At the same time, large language models (LLMs)
encode knowledge about goals and plans that can help conversational assistants
interpret user requests requiring numerous steps to complete. We introduce an
approach to handle complex-intent-bearing utterances from a user via a process
of hierarchical natural language decomposition and interpretation. Our approach
uses a pre-trained language model to decompose a complex utterance into a
sequence of simpler natural language steps and interprets each step using the
language-to-program model designed for the interface. To test our approach, we
collect and release DeCU -- a new NL-to-program benchmark to evaluate
Decomposition of Complex Utterances. Experiments show that the proposed
approach enables the interpretation of complex utterances with almost no
complex training data, while outperforming standard few-shot prompting
approaches. | https://huggingface.co/papers/2305.08677 |
2023-05-16 | 2305.08675 | 2 | Improved baselines for vision-language pre-training | Contrastive learning has emerged as an efficient framework to learn
multimodal representations. CLIP, a seminal work in this area, achieved
impressive results by training on paired image-text data using the contrastive
loss. Recent work claims improvements over CLIP using additional
non-contrastive losses inspired from self-supervised learning. However, it is
sometimes hard to disentangle the contribution of these additional losses from
other implementation details, e.g., data augmentation or regularization
techniques, used to train the model. To shed light on this matter, in this
paper, we first propose, implement and evaluate several baselines obtained by
combining contrastive learning with recent advances in self-supervised
learning. In particular, we use the loss functions that were proven successful
for visual self-supervised learning to align image and text modalities. We find
that these baselines outperform a basic implementation of CLIP. However, when a
stronger training recipe is employed, the advantage disappears. Indeed, we find
that a simple CLIP baseline can also be improved substantially, up to a 25%
relative improvement on downstream zero-shot tasks, by using well-known
training techniques that are popular in other subfields. Moreover, we discover
that it is enough to apply image and text augmentations to make up for most of
the improvement attained by prior works. With our improved training recipe for
CLIP, we obtain state-of-the-art performance on four standard datasets, and
consistently outperform prior work (up to +4% on the largest dataset), while
being substantially simpler. | https://huggingface.co/papers/2305.08675 |
2023-05-16 | 2305.08275 | 2 | ULIP-2: Towards Scalable Multimodal Pre-training For 3D Understanding | Recent advancements in multimodal pre-training methods have shown promising
efficacy in 3D representation learning by aligning features across the 3D
modality, its 2D counterpart modality, and the corresponding language modality. However,
the methods used by existing multimodal pre-training frameworks to gather
multimodal data for 3D applications lack scalability and comprehensiveness,
potentially constraining the full potential of multimodal learning. The main
bottleneck lies in the language modality's scalability and comprehensiveness.
To address this bottleneck, we introduce ULIP-2, a multimodal pre-training
framework that leverages state-of-the-art multimodal large language models
(LLMs) pre-trained on extensive knowledge to automatically generate holistic
language counterparts for 3D objects. We conduct experiments on two large-scale
datasets, Objaverse and ShapeNet55, and release our generated three-modality
triplet datasets (3D Point Cloud - Image - Language), named "ULIP-Objaverse
Triplets" and "ULIP-ShapeNet Triplets". ULIP-2 requires only 3D data itself and
eliminates the need for any manual annotation effort, demonstrating its
scalability; and ULIP-2 achieves remarkable improvements on downstream
zero-shot classification on ModelNet40 (74% Top1 Accuracy). Moreover, ULIP-2
sets a new record on the real-world ScanObjectNN benchmark (91.5% Overall
Accuracy) while utilizing only 1.4 million parameters (~10x fewer than current
SOTA), signifying a breakthrough in scalable multimodal 3D representation
learning without human annotations. The code and datasets are available at
https://github.com/salesforce/ULIP. | https://huggingface.co/papers/2305.08275 |
2023-05-16 | 2305.07804 | 2 | Dr. LLaMA: Improving Small Language Models in Domain-Specific QA via
Generative Data Augmentation | Large Language Models (LLMs) have made remarkable advancements in the field
of natural language processing. However, their increasing size poses challenges
in terms of computational cost. On the other hand, Small Language Models (SLMs)
are known for their efficiency, but they often struggle with limited capacity
and training data, especially in specific domains. In this paper, we introduce
a novel method aimed at improving SLMs in the medical domain using LLM-based
generative data augmentation. The objective of our approach is to develop more
efficient and capable models that are specifically tailored for specialized
applications. Through experiments conducted on the PubMedQA dataset, we
demonstrate the effectiveness of LLMs in refining and diversifying existing
question-answer pairs. This refinement process leads to improved performance in
a significantly smaller model after fine-tuning. Notably, our best SLM, with
under 1.6 billion parameters, outperforms the few-shot GPT-4 on the PubMedQA
dataset. Our code and generated data are publicly available to facilitate
further explorations. | https://huggingface.co/papers/2305.07804 |
2023-05-16 | 2305.07677 | 2 | Masked Audio Text Encoders are Effective Multi-Modal Rescorers | Masked Language Models (MLMs) have proven to be effective for second-pass
rescoring in Automatic Speech Recognition (ASR) systems. In this work, we
propose Masked Audio Text Encoder (MATE), a multi-modal masked language model
rescorer which incorporates acoustic representations into the input space of
MLM. We adopt contrastive learning for effectively aligning the modalities by
learning shared representations. We show that using a multi-modal rescorer is
beneficial for domain generalization of the ASR system when target domain data
is unavailable. MATE reduces word error rate (WER) by 4%-16% on in-domain, and
3%-7% on out-of-domain datasets, over the text-only baseline. Additionally,
with a very limited amount of training data (0.8 hours), MATE achieves a WER
reduction of 8%-23% over the first-pass baseline. | https://huggingface.co/papers/2305.07677 |
2023-05-16 | 2305.09148 | 1 | Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | Recent studies have shown that dual encoder models trained with the
sentence-level translation ranking task are effective methods for cross-lingual
sentence embedding. However, our research indicates that token-level alignment
is also crucial in multilingual scenarios, which has not been fully explored
previously. Based on our findings, we propose a dual-alignment pre-training
(DAP) framework for cross-lingual sentence embedding that incorporates both
sentence-level and token-level alignment. To achieve this, we introduce a novel
representation translation learning (RTL) task, where the model learns to use
one-side contextualized token representation to reconstruct its translation
counterpart. This reconstruction objective encourages the model to embed
translation information into the token representation. Compared to other
token-level alignment methods such as translation language modeling, RTL is
more suitable for dual encoder architectures and is computationally efficient.
Extensive experiments on three sentence-level cross-lingual benchmarks
demonstrate that our approach can significantly improve sentence embedding. Our
code is available at https://github.com/ChillingDream/DAP. | https://huggingface.co/papers/2305.09148 |
2023-05-16 | 2305.08844 | 1 | RL4F: Generating Natural Language Feedback with Reinforcement Learning
for Repairing Model Outputs | Despite their unprecedented success, even the largest language models make
mistakes. Similar to how humans learn and improve using feedback, previous work
proposed providing language models with natural language feedback to guide them
in repairing their outputs. Because human-generated critiques are expensive to
obtain, researchers have devised learned critique generators in lieu of human
critics while assuming one can train downstream models to utilize generated
feedback. However, this approach does not apply to black-box or limited access
models such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of
large general-purpose language agents, fine-tuning is neither computationally
nor spatially efficient as it results in multiple copies of the network. In
this work, we introduce RL4F (Reinforcement Learning for Feedback), a
multi-agent collaborative framework where the critique generator is trained to
maximize end-task performance of GPT-3, a fixed model more than 200 times its
size. RL4F produces critiques that help GPT-3 revise its outputs. We study
three datasets for action planning, summarization and alphabetization and show
improvements (~5% on average) in multiple text similarity metrics over strong
baselines across all three tasks. | https://huggingface.co/papers/2305.08844 |
2023-05-16 | 2305.07969 | 1 | GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content | This paper presents a novel approach for detecting ChatGPT-generated vs.
human-written text using language models. To this end, we first collected and
released a pre-processed dataset named OpenGPTText, which consists of rephrased
content generated using ChatGPT. We then designed, implemented, and trained two
different models for text classification, using Robustly Optimized BERT
Pretraining Approach (RoBERTa) and Text-to-Text Transfer Transformer (T5),
respectively. Our models achieved remarkable results, with an accuracy of over
97% on the test dataset, as evaluated through various metrics. Furthermore, we
conducted an interpretability study to showcase our model's ability to extract
and differentiate key features between human-written and ChatGPT-generated
text. Our findings provide important insights into the effective use of
language models to detect generated text. | https://huggingface.co/papers/2305.07969 |
2023-05-17 | 2305.08891 | 11 | Common Diffusion Noise Schedules and Sample Steps are Flawed | We discover that common diffusion noise schedules do not enforce the last
timestep to have zero signal-to-noise ratio (SNR), and some implementations of
diffusion samplers do not start from the last timestep. Such designs are flawed
and do not reflect the fact that the model is given pure Gaussian noise at
inference, creating a discrepancy between training and inference. We show that
the flawed design causes real problems in existing implementations. In Stable
Diffusion, it severely limits the model to only generate images with medium
brightness and prevents it from generating very bright and dark samples. We
propose a few simple fixes: (1) rescale the noise schedule to enforce zero
terminal SNR; (2) train the model with v prediction; (3) change the sampler to
always start from the last timestep; (4) rescale classifier-free guidance to
prevent over-exposure. These simple changes ensure the diffusion process is
congruent between training and inference and allow the model to generate
samples more faithful to the original data distribution. | https://huggingface.co/papers/2305.08891 |
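Fix (1), rescaling the noise schedule to enforce zero terminal SNR, can be sketched in NumPy. The idea is to shift and scale the square roots of the cumulative alphas so the last one becomes exactly zero (pure noise at the final timestep) while the first is unchanged, then recover the per-step betas; the linear schedule below is a common choice used for illustration.

```python
import numpy as np

def rescale_zero_terminal_snr(betas: np.ndarray) -> np.ndarray:
    """Rescale a beta schedule so the final timestep has zero SNR
    (alpha_bar at t=T becomes exactly 0)."""
    alphas_bar_sqrt = np.sqrt(np.cumprod(1.0 - betas))
    a_first, a_last = alphas_bar_sqrt[0], alphas_bar_sqrt[-1]
    # Shift so the last value is 0, then scale so the first value is unchanged.
    alphas_bar_sqrt = (alphas_bar_sqrt - a_last) * a_first / (a_first - a_last)
    alphas_bar = alphas_bar_sqrt ** 2
    # Recover per-step alphas from consecutive ratios, then convert to betas.
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas

betas = np.linspace(1e-4, 0.02, 1000)  # common linear schedule
new_betas = rescale_zero_terminal_snr(betas)
# cumprod(1 - new_betas) now ends at exactly zero signal, i.e. zero terminal SNR
```

Because the terminal alpha-bar is zero, the model is trained on pure Gaussian noise at the last timestep, matching what it receives at inference (this is also why the paper pairs the fix with v-prediction).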
2023-05-17 | 2305.09641 | 3 | FitMe: Deep Photorealistic 3D Morphable Model Avatars | In this paper, we introduce FitMe, a facial reflectance model and a
differentiable rendering optimization pipeline, that can be used to acquire
high-fidelity renderable human avatars from single or multiple images. The
model consists of a multi-modal style-based generator, that captures facial
appearance in terms of diffuse and specular reflectance, and a PCA-based shape
model. We employ a fast differentiable rendering process that can be used in an
optimization pipeline, while also achieving photorealistic facial shading. Our
optimization process accurately captures both the facial reflectance and shape
in high-detail, by exploiting the expressivity of the style-based latent
representation and of our shape model. FitMe achieves state-of-the-art
reflectance acquisition and identity preservation on single "in-the-wild"
facial images, while it produces impressive scan-like results, when given
multiple unconstrained facial images pertaining to the same identity. In
contrast with recent implicit avatar reconstructions, FitMe requires only one
minute and produces relightable mesh and texture-based avatars, that can be
used by end-user applications. | https://huggingface.co/papers/2305.09641 |
2023-05-17 | 2305.10431 | 2 | FastComposer: Tuning-Free Multi-Subject Image Generation with Localized
Attention | Diffusion models excel at text-to-image generation, especially in
subject-driven generation for personalized images. However, existing methods
are inefficient due to the subject-specific fine-tuning, which is
computationally intensive and hampers efficient deployment. Moreover, existing
methods struggle with multi-subject generation as they often blend features
among subjects. We present FastComposer which enables efficient, personalized,
multi-subject text-to-image generation without fine-tuning. FastComposer uses
subject embeddings extracted by an image encoder to augment the generic text
conditioning in diffusion models, enabling personalized image generation based
on subject images and textual instructions with only forward passes. To address
the identity blending problem in the multi-subject generation, FastComposer
proposes cross-attention localization supervision during training, enforcing
the attention of reference subjects localized to the correct regions in the
target images. Naively conditioning on subject embeddings results in subject
overfitting. FastComposer proposes delayed subject conditioning in the
denoising step to maintain both identity and editability in subject-driven
image generation. FastComposer generates images of multiple unseen individuals
with different styles, actions, and contexts. It achieves
a 300x-2500x speedup compared to fine-tuning-based methods and
requires zero extra storage for new subjects. FastComposer paves the way for
efficient, personalized, and high-quality multi-subject image creation. Code,
model, and dataset are available at
https://github.com/mit-han-lab/fastcomposer. | https://huggingface.co/papers/2305.10431 |
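The delayed subject conditioning described above reduces to a simple switch on the denoising step; the cutoff fraction `alpha` below is an illustrative assumption, not a value from the paper.

```python
def choose_conditioning(step: int, total_steps: int, text_emb, augmented_emb,
                        alpha: float = 0.3):
    """Delayed subject conditioning (sketch): use plain text conditioning for
    the first `alpha` fraction of denoising steps (preserving layout and
    editability), then switch to subject-augmented embeddings (preserving
    identity) for the remaining steps."""
    return text_emb if step < alpha * total_steps else augmented_emb

choose_conditioning(0, 50, "text-only", "subject-augmented")   # early step → text-only
choose_conditioning(40, 50, "text-only", "subject-augmented")  # late step → subject-augmented
```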
2023-05-17 | 2305.10400 | 2 | What You See is What You Read? Improving Text-Image Alignment Evaluation | Automatically determining whether a text and a corresponding image are
semantically aligned is a significant challenge for vision-language models,
with applications in generative text-to-image and image-to-text tasks. In this
work, we study methods for automatic text-image alignment evaluation. We first
introduce SeeTRUE: a comprehensive evaluation set, spanning multiple datasets
from both text-to-image and image-to-text generation tasks, with human
judgements for whether a given text-image pair is semantically aligned. We then
describe two automatic methods to determine alignment: the first involving a
pipeline based on question generation and visual question answering models, and
the second employing an end-to-end classification approach by finetuning
multimodal pretrained models. Both methods surpass prior approaches in various
text-image alignment tasks, with significant improvements in challenging cases
that involve complex composition or unnatural images. Finally, we demonstrate
how our approaches can localize specific misalignments between an image and a
given text, and how they can be used to automatically re-rank candidates in
text-to-image generation. | https://huggingface.co/papers/2305.10400 |
2023-05-17 | 2305.09664 | 2 | Understanding 3D Object Interaction from a Single Image | Humans can easily understand a single image as depicting multiple potential
objects permitting interaction. We use this skill to plan our interactions with
the world and accelerate understanding new objects without engaging in
interaction. In this paper, we would like to endow machines with a similar
ability, so that intelligent agents can better explore the 3D scene or
manipulate objects. Our approach is a transformer-based model that predicts the
3D location, physical properties and affordance of objects. To power this
model, we collect a dataset with Internet videos, egocentric videos and indoor
images to train and validate our approach. Our model yields strong performance
on our data, and generalizes well to robotics data. | https://huggingface.co/papers/2305.09664 |
2023-05-17 | 2305.09253 | 2 | Online Continual Learning Without the Storage Constraint | Online continual learning (OCL) research has primarily focused on mitigating
catastrophic forgetting with fixed and limited storage allocation throughout
the agent's lifetime. However, the growing affordability of data storage
highlights a broad range of applications that do not adhere to these
assumptions. In these cases, the primary concern lies in managing computational
expenditures rather than storage. In this paper, we target such settings,
investigating the online continual learning problem by relaxing storage
constraints and emphasizing a fixed, limited economical budget. We provide a
simple algorithm that can compactly store and utilize the entirety of the
incoming data stream under tiny computational budgets using a kNN classifier
and universal pre-trained feature extractors. Our algorithm provides a
consistency property attractive to continual learning: It will never forget
past seen data. We set a new state of the art on two large-scale OCL datasets:
Continual LOCalization (CLOC), which has 39M images over 712 classes, and
Continual Google Landmarks V2 (CGLM), which has 580K images over 10,788 classes
-- beating methods under far higher computational budgets than ours in terms of
both reducing catastrophic forgetting of past data and quickly adapting to
rapidly changing data streams. We provide code to reproduce our results at
https://github.com/drimpossible/ACM. | https://huggingface.co/papers/2305.09253 |
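The core of the storage-unconstrained learner can be sketched as a growing feature store with nearest-neighbour prediction; this minimal version uses 1-NN over raw features and omits the paper's engineering for tiny compute budgets, so treat names and details as illustrative.

```python
import numpy as np

class StreamKNN:
    """Sketch of the storage-unconstrained continual learner: every incoming
    (feature, label) pair from the stream is kept, and prediction is
    1-nearest-neighbour over frozen pre-trained features. Because nothing is
    ever discarded, past seen data is never forgotten."""

    def __init__(self):
        self.feats, self.labels = [], []

    def update(self, feat, label):
        # Online step: store the new example; no gradient updates needed.
        self.feats.append(np.asarray(feat, dtype=float))
        self.labels.append(label)

    def predict(self, feat):
        feat = np.asarray(feat, dtype=float)
        dists = [np.linalg.norm(f - feat) for f in self.feats]
        return self.labels[int(np.argmin(dists))]
```

In the paper's setting the features would come from a universal pre-trained extractor, which is what makes a parameter-free classifier like kNN competitive.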
2023-05-18 | 2305.10403 | 7 | PaLM 2 Technical Report | We introduce PaLM 2, a new state-of-the-art language model that has better
multilingual and reasoning capabilities and is more compute-efficient than its
predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture
of objectives. Through extensive evaluations on English and multilingual
language, and reasoning tasks, we demonstrate that PaLM 2 has significantly
improved quality on downstream tasks across different model sizes, while
simultaneously exhibiting faster and more efficient inference compared to PaLM.
This improved efficiency enables broader deployment while also allowing the
model to respond faster, for a more natural pace of interaction. PaLM 2
demonstrates robust reasoning capabilities exemplified by large improvements
over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable
performance on a suite of responsible AI evaluations, and enables
inference-time control over toxicity without additional overhead or impact on
other capabilities. Overall, PaLM 2 achieves state-of-the-art performance
across a diverse set of tasks and capabilities.
When discussing the PaLM 2 family, it is important to distinguish between
pre-trained models (of various sizes), fine-tuned variants of these models, and
the user-facing products that use these models. In particular, user-facing
products typically include additional pre- and post-processing steps.
Additionally, the underlying models may evolve over time. Therefore, one should
not expect the performance of user-facing products to exactly match the results
reported in this report. | https://huggingface.co/papers/2305.10403 |
2023-05-18 | 2305.09857 | 7 | CoEdIT: Text Editing by Task-Specific Instruction Tuning | Text editing or revision is an essential function of the human writing
process. Understanding the capabilities of LLMs for making high-quality
revisions and collaborating with human writers is a critical step toward
building effective writing assistants. With the prior success of LLMs and
instruction tuning, we leverage instruction-tuned LLMs for text revision to
improve the quality of user-generated text and improve the efficiency of the
process. We introduce CoEdIT, a state-of-the-art text editing model for writing
assistance. CoEdIT takes instructions from the user specifying the attributes
of the desired text, such as "Make the sentence simpler" or "Write it in a more
neutral style," and outputs the edited text. We present a large language model
fine-tuned on a diverse collection of task-specific instructions for text
editing (a total of 82K instructions). Our model (1) achieves state-of-the-art
performance on various text editing benchmarks, (2) is competitive with
the largest publicly available LLMs trained on instructions while being
~60x smaller, (3) is capable of generalizing to unseen edit instructions,
and (4) exhibits compositional comprehension abilities to generalize to
instructions containing different combinations of edit actions. Through
extensive qualitative and quantitative analysis, we show that writers prefer
the edits suggested by CoEdIT, relative to other state-of-the-art text editing
models. Our code and dataset are publicly available. | https://huggingface.co/papers/2305.09857 |
2023-05-18 | 2305.10425 | 5 | SLiC-HF: Sequence Likelihood Calibration with Human Feedback | Learning from human feedback has been shown to be effective at aligning
language models with human preferences. Past work has often relied on
Reinforcement Learning from Human Feedback (RLHF), which optimizes the language
model using reward scores assigned from a reward model trained on human
preference data. In this work we show how the recently introduced Sequence
Likelihood Calibration (SLiC), can also be used to effectively learn from human
preferences (SLiC-HF). Furthermore, we demonstrate this can be done with human
feedback data collected for a different model, similar to off-policy, offline
RL data. Automatic and human evaluation experiments on the TL;DR summarization
task show that SLiC-HF significantly improves supervised fine-tuning baselines.
Furthermore, SLiC-HF presents a competitive alternative to the PPO RLHF
implementation used in past work while being much simpler to implement, easier
to tune and more computationally efficient in practice. | https://huggingface.co/papers/2305.10425 |
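SLiC's preference-learning objective includes a hinge-style rank calibration term, which is what SLiC-HF applies to human feedback data; the sketch below shows that term for a single preference pair, with the margin value as an illustrative assumption.

```python
def slic_rank_loss(logp_preferred: float, logp_rejected: float,
                   delta: float = 1.0) -> float:
    """Rank calibration loss (sketch): push the model's log-likelihood of the
    human-preferred sequence above that of the rejected sequence by at least a
    margin `delta`; the loss is zero once the margin is satisfied."""
    return max(0.0, delta - (logp_preferred - logp_rejected))

slic_rank_loss(-10.0, -12.0)  # preferred is 2 nats more likely: margin met → 0.0
slic_rank_loss(-12.0, -10.0)  # preference violated by 2 nats → loss 3.0
```

Because this is an offline loss over static preference pairs, no reward model rollout or PPO machinery is needed, which is the source of the simplicity the abstract highlights.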
2023-05-18 | 2305.10429 | 3 | DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining | The mixture proportions of pretraining data domains (e.g., Wikipedia, books,
web text) greatly affect language model (LM) performance. In this paper, we
propose Domain Reweighting with Minimax Optimization (DoReMi), which first
trains a small proxy model using group distributionally robust optimization
(Group DRO) over domains to produce domain weights (mixture proportions)
without knowledge of downstream tasks. We then resample a dataset with these
domain weights and train a larger, full-sized model. In our experiments, we use
DoReMi on a 280M-parameter proxy model to find domain weights for training an
8B-parameter model (30x larger) more efficiently. On The Pile, DoReMi improves
perplexity across all domains, even when it downweights a domain. DoReMi
improves average few-shot downstream accuracy by 6.5% over a baseline model
trained using The Pile's default domain weights and reaches the baseline
accuracy with 2.6x fewer training steps. On the GLaM dataset, DoReMi, which has
no knowledge of downstream tasks, even matches the performance of using domain
weights tuned on downstream tasks. | https://huggingface.co/papers/2305.10429 |
2023-05-18 | 2305.10005 | 3 | DinoSR: Self-Distillation and Online Clustering for Self-supervised
Speech Representation Learning | In this paper, we introduce self-distillation and online clustering for
self-supervised speech representation learning (DinoSR), which combines masked
language modeling, self-distillation, and online clustering. We show that these
concepts complement each other and result in a strong representation learning
model for speech. DinoSR first extracts contextualized embeddings from the
input audio with a teacher network, then runs an online clustering system on
the embeddings to yield a machine-discovered phone inventory, and finally uses
the discretized tokens to guide a student network. We show that DinoSR
surpasses previous state-of-the-art performance in several downstream tasks,
and provide a detailed analysis of the model and the learned discrete units.
The source code will be made available after the anonymity period. | https://huggingface.co/papers/2305.10005 |
2023-05-18 | 2305.09975 | 2 | Smart Word Suggestions for Writing Assistance | Enhancing word usage is a desired feature for writing assistance. To further
advance research in this area, this paper introduces "Smart Word Suggestions"
(SWS) task and benchmark. Unlike other works, SWS emphasizes end-to-end
evaluation and presents a more realistic writing assistance scenario. This task
involves identifying words or phrases that require improvement and providing
substitution suggestions. The benchmark includes human-labeled data for
testing, a large distantly supervised dataset for training, and the framework
for evaluation. The test data includes 1,000 sentences written by English
learners, accompanied by over 16,000 substitution suggestions annotated by 10
native speakers. The training dataset comprises over 3.7 million sentences and
12.7 million suggestions generated through rules. Our experiments with seven
baselines demonstrate that SWS is a challenging task. Based on experimental
analysis, we suggest potential directions for future research on SWS. The
dataset and related code are available at
https://github.com/microsoft/SmartWordSuggestions. | https://huggingface.co/papers/2305.09975 |
2023-05-18 | 2305.09863 | 2 | Explaining black box text modules in natural language with language
models | Large language models (LLMs) have demonstrated remarkable prediction
performance for a growing array of tasks. However, their rapid proliferation
and increasing opaqueness have created a growing need for interpretability.
Here, we ask whether we can automatically obtain natural language explanations
for black box text modules. A "text module" is any function that maps text to a
scalar continuous value, such as a submodule within an LLM or a fitted model of
a brain region. "Black box" indicates that we only have access to the module's
inputs/outputs.
We introduce Summarize and Score (SASC), a method that takes in a text module
and returns a natural language explanation of the module's selectivity along
with a score for how reliable the explanation is. We study SASC in 3 contexts.
First, we evaluate SASC on synthetic modules and find that it often recovers
ground truth explanations. Second, we use SASC to explain modules found within
a pre-trained BERT model, enabling inspection of the model's internals.
Finally, we show that SASC can generate explanations for the response of
individual fMRI voxels to language stimuli, with potential applications to
fine-grained brain mapping. All code for using SASC and reproducing results is
made available on Github. | https://huggingface.co/papers/2305.09863 |
2023-05-18 | 2305.09764 | 2 | Application-Agnostic Language Modeling for On-Device ASR | On-device automatic speech recognition systems face several challenges
compared to server-based systems. They have to meet stricter constraints in
terms of speed, disk size and memory while maintaining the same accuracy. Often
they have to serve several applications with different distributions at once,
such as communicating with a virtual assistant and speech-to-text. The simplest
solution to serve multiple applications is to build application-specific
(language) models, but this leads to an increase in memory. Therefore, we
explore different data- and architecture-driven language modeling approaches to
build a single application-agnostic model. We propose two novel feed-forward
architectures that find an optimal trade-off between different on-device
constraints. In comparison to the application-specific solution, one of our
novel approaches reduces the disk size by half, while maintaining speed and
accuracy of the original model. | https://huggingface.co/papers/2305.09764 |
2023-05-18 | 2305.10320 | 1 | CostFormer: Cost Transformer for Cost Aggregation in Multi-view Stereo | The core of Multi-view Stereo (MVS) is the matching process between reference
and source pixels. Cost aggregation plays a significant role in this process,
while previous methods focus on handling it via CNNs. This may inherit the
natural limitation of CNNs that fail to discriminate repetitive or incorrect
matches due to limited local receptive fields. To handle the issue, we aim to
involve Transformer into cost aggregation. However, another problem may occur
due to the quadratically growing computational complexity caused by
Transformer, resulting in memory overflow and inference latency. In this paper,
we overcome these limits with an efficient Transformer-based cost aggregation
network, namely CostFormer. The Residual Depth-Aware Cost Transformer (RDACT) is
proposed to aggregate long-range features on cost volume via self-attention
mechanisms along the depth and spatial dimensions. Furthermore, Residual
Regression Transformer (RRT) is proposed to enhance spatial attention. The
proposed method is a universal plug-in to improve learning-based MVS methods. | https://huggingface.co/papers/2305.10320 |
2023-05-18 | 2305.10266 | 1 | Searching for Needles in a Haystack: On the Role of Incidental
Bilingualism in PaLM's Translation Capability | Large, multilingual language models exhibit surprisingly good zero- or
few-shot machine translation capabilities, despite having never seen the
intentionally-included translation examples provided to typical neural
translation systems. We investigate the role of incidental bilingualism -- the
unintentional consumption of bilingual signals, including translation examples
-- in explaining the translation capabilities of large language models, taking
the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method
approach to measure and understand incidental bilingualism at scale. We show
that PaLM is exposed to over 30 million translation pairs across at least 44
languages. Furthermore, the amount of incidental bilingual content is highly
correlated with the amount of monolingual in-language content for non-English
languages. We relate incidental bilingual content to zero-shot prompts and show
that it can be used to mine new prompts to improve PaLM's out-of-English
zero-shot translation quality. Finally, in a series of small-scale ablations,
we show that its presence has a substantial impact on translation capabilities,
although this impact diminishes with model scale. | https://huggingface.co/papers/2305.10266 |
2023-05-18 | 2305.10142 | 1 | Improving Language Model Negotiation with Self-Play and In-Context
Learning from AI Feedback | We study whether multiple large language models (LLMs) can autonomously
improve each other in a negotiation game by playing, reflecting, and
criticizing. We are interested in this question because if LLMs were able to
improve each other, it would imply the possibility of creating strong AI agents
with minimal human intervention. We ask two LLMs to negotiate with each other,
playing the roles of a buyer and a seller, respectively. They aim to reach a
deal with the buyer targeting a lower price and the seller a higher one. A
third language model, playing the critic, provides feedback to a player to
improve the player's negotiation strategies. We let the two agents play
multiple rounds, using previous negotiation history and AI feedback as
in-context demonstrations to improve the model's negotiation strategy
iteratively. We use different LLMs (GPT and Claude) for different roles and use
the deal price as the evaluation metric. Our experiments reveal multiple
intriguing findings: (1) Only a subset of the language models we consider can
self-play and improve the deal price from AI feedback; weaker models either do
not understand the game's rules or cannot incorporate AI feedback for further
improvement. (2) Models' abilities to learn from the feedback differ when
playing different roles. For example, it is harder for Claude-instant to
improve as the buyer than as the seller. (3) When unrolling the game to
multiple rounds, stronger agents can consistently improve their performance by
meaningfully using previous experiences and iterative AI feedback, yet have a
higher risk of breaking the deal. We hope our work provides insightful initial
explorations of having models autonomously improve each other with game playing
and AI feedback. | https://huggingface.co/papers/2305.10142 |
2023-05-18 | 2305.10018 | 1 | Transfer Learning for Fine-grained Classification Using Semi-supervised
Learning and Visual Transformers | Fine-grained classification is a challenging task that involves identifying
subtle differences between objects within the same category. This task is
particularly challenging in scenarios where data is scarce. Visual transformers
(ViT) have recently emerged as a powerful tool for image classification, due to
their ability to learn highly expressive representations of visual data using
self-attention mechanisms. In this work, we explore Semi-ViT, a ViT model fine
tuned using semi-supervised learning techniques, suitable for situations where
annotated data is scarce. This is particularly common in e-commerce,
where images are readily available but labels are noisy, nonexistent, or
expensive to obtain. Our results demonstrate that Semi-ViT outperforms
traditional convolutional neural networks (CNNs) and ViTs, even when fine-tuned
with limited annotated data. These findings indicate that Semi-ViTs hold
significant promise for applications that require precise and fine-grained
classification of visual data. | https://huggingface.co/papers/2305.10018 |
2023-05-18 | 2305.09761 | 1 | NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to
Robotics | This work was presented at the IEEE International Conference on Robotics and
Automation 2023 Workshop on Unconventional Spatial Representations.
Neural radiance fields (NeRFs) are a class of implicit scene representations
that model 3D environments from color images. NeRFs are expressive, and can
model the complex and multi-scale geometry of real world environments, which
potentially makes them a powerful tool for robotics applications. Modern NeRF
training libraries can generate a photo-realistic NeRF from a static data set
in just a few seconds, but are designed for offline use and require a slow pose
optimization pre-computation step.
In this work we propose NerfBridge, an open-source bridge between the Robot
Operating System (ROS) and the popular Nerfstudio library for real-time, online
training of NeRFs from a stream of images. NerfBridge enables rapid development
of research on applications of NeRFs in robotics by providing an extensible
interface to the efficient training pipelines and model libraries provided by
Nerfstudio. As an example use case we outline a hardware setup that can be used
with NerfBridge to train a NeRF from images captured by a camera mounted to a
quadrotor in both indoor and outdoor environments.
An accompanying video is available at https://youtu.be/EH0SLn-RcDg and code at
https://github.com/javieryu/nerf_bridge. | https://huggingface.co/papers/2305.09761 |
2023-05-18 | 2305.09758 | 1 | A Video Is Worth 4096 Tokens: Verbalize Story Videos To Understand Them
In Zero Shot | Multimedia content, such as advertisements and story videos, exhibit a rich
blend of creativity and multiple modalities. They incorporate elements like
text, visuals, audio, and storytelling techniques, employing devices like
emotions, symbolism, and slogans to convey meaning. While previous research in
multimedia understanding has focused mainly on videos with specific actions
like cooking, there is a dearth of large annotated training datasets, hindering
the development of supervised learning models with satisfactory performance for
real-world applications. However, the rise of large language models (LLMs) has
witnessed remarkable zero-shot performance in various natural language
processing (NLP) tasks, such as emotion classification, question-answering, and
topic classification. To bridge this performance gap in multimedia
understanding, we propose verbalizing story videos to generate their
descriptions in natural language and then performing video-understanding tasks
on the generated story as opposed to the original video. Through extensive
experiments on five video-understanding tasks, we demonstrate that our method,
despite being zero-shot, achieves significantly better results than supervised
baselines for video understanding. Further, alleviating a lack of story
understanding benchmarks, we publicly release the first dataset on a crucial
task in computational social science, persuasion strategy identification. | https://huggingface.co/papers/2305.09758 |
2023-05-19 | 2305.10973 | 37 | Drag Your GAN: Interactive Point-based Manipulation on the Generative
Image Manifold | Synthesizing visual content that meets users' needs often requires flexible
and precise controllability of the pose, shape, expression, and layout of the
generated objects. Existing approaches gain controllability of generative
adversarial networks (GANs) via manually annotated training data or a prior 3D
model, which often lack flexibility, precision, and generality. In this work,
we study a powerful yet much less explored way of controlling GANs, that is, to
"drag" any points of the image to precisely reach target points in a
user-interactive manner, as shown in Fig.1. To achieve this, we propose
DragGAN, which consists of two main components: 1) a feature-based motion
supervision that drives the handle point to move towards the target position,
and 2) a new point tracking approach that leverages the discriminative
generator features to keep localizing the position of the handle points.
Through DragGAN, anyone can deform an image with precise control over where
pixels go, thus manipulating the pose, shape, expression, and layout of diverse
categories such as animals, cars, humans, landscapes, etc. As these
manipulations are performed on the learned generative image manifold of a GAN,
they tend to produce realistic outputs even for challenging scenarios such as
hallucinating occluded content and deforming shapes that consistently follow
the object's rigidity. Both qualitative and quantitative comparisons
demonstrate the advantage of DragGAN over prior approaches in the tasks of
image manipulation and point tracking. We also showcase the manipulation of
real images through GAN inversion. | https://huggingface.co/papers/2305.10973 |
2023-05-19 | 2305.10601 | 13 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Language models are increasingly being deployed for general problem solving
across a wide range of tasks, but are still confined to token-level,
left-to-right decision-making processes during inference. This means they can
fall short in tasks that require exploration, strategic lookahead, or where
initial decisions play a pivotal role. To surmount these challenges, we
introduce a new framework for language model inference, Tree of Thoughts (ToT),
which generalizes over the popular Chain of Thought approach to prompting
language models, and enables exploration over coherent units of text (thoughts)
that serve as intermediate steps toward problem solving. ToT allows LMs to
perform deliberate decision making by considering multiple different reasoning
paths and self-evaluating choices to decide the next course of action, as well
as looking ahead or backtracking when necessary to make global choices. Our
experiments show that ToT significantly enhances language models'
problem-solving abilities on three novel tasks requiring non-trivial planning
or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in
Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of
tasks, our method achieved a success rate of 74%. Code repo with all prompts:
https://github.com/princeton-nlp/tree-of-thought-llm. | https://huggingface.co/papers/2305.10601 |
2023-05-19 | 2305.10853 | 12 | LDM3D: Latent Diffusion Model for 3D | This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that
generates both image and depth map data from a given text prompt, allowing
users to generate RGBD images from text prompts. The LDM3D model is fine-tuned
on a dataset of tuples containing an RGB image, depth map and caption, and
validated through extensive experiments. We also develop an application called
DepthFusion, which uses the generated RGB images and depth maps to create
immersive and interactive 360-degree-view experiences using TouchDesigner. This
technology has the potential to transform a wide range of industries, from
entertainment and gaming to architecture and design. Overall, this paper
presents a significant contribution to the field of generative AI and computer
vision, and showcases the potential of LDM3D and DepthFusion to revolutionize
content creation and digital experiences. A short video summarizing the
approach can be found at https://t.ly/tdi2. | https://huggingface.co/papers/2305.10853 |
2023-05-19 | 2305.10764 | 6 | OpenShape: Scaling Up 3D Shape Representation Towards Open-World
Understanding | We introduce OpenShape, a method for learning multi-modal joint
representations of text, image, and point clouds. We adopt the commonly used
multi-modal contrastive learning framework for representation alignment, but
with a specific focus on scaling up 3D representations to enable open-world 3D
shape understanding. To achieve this, we scale up training data by ensembling
multiple 3D datasets and propose several strategies to automatically filter and
enrich noisy text descriptions. We also explore and compare strategies for
scaling 3D backbone networks and introduce a novel hard negative mining module
for more efficient training. We evaluate OpenShape on zero-shot 3D
classification benchmarks and demonstrate its superior capabilities for
open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy
of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than
10% for existing methods. OpenShape also achieves an accuracy of 85.3% on
ModelNet40, outperforming previous zero-shot baseline methods by 20% and
performing on par with some fully-supervised methods. Furthermore, we show that
our learned embeddings encode a wide range of visual and semantic concepts
(e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D
and image-3D interactions. Due to their alignment with CLIP embeddings, our
learned shape representations can also be integrated with off-the-shelf
CLIP-based models for various applications, such as point cloud captioning and
point cloud-conditioned image generation. | https://huggingface.co/papers/2305.10764 |
2023-05-19 | 2305.11000 | 4 | SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal
Conversational Abilities | Multi-modal large language models are regarded as a crucial step towards
Artificial General Intelligence (AGI) and have garnered significant interest
with the emergence of ChatGPT. However, current speech-language models
typically adopt the cascade paradigm, preventing inter-modal knowledge
transfer. In this paper, we propose SpeechGPT, a large language model with
intrinsic cross-modal conversational abilities, capable of perceiving and
generating multi-modal content. With discrete speech representations, we first
construct SpeechInstruct, a large-scale cross-modal speech instruction dataset.
Additionally, we employ a three-stage training strategy that includes
modality-adaptation pre-training, cross-modal instruction fine-tuning, and
chain-of-modality instruction fine-tuning. The experimental results demonstrate
that SpeechGPT has an impressive capacity to follow multi-modal human
instructions and highlight the potential of handling multiple modalities with
one model. Demos are shown in https://0nutation.github.io/SpeechGPT.github.io/. | https://huggingface.co/papers/2305.11000 |
2023-05-19 | 2305.11175 | 3 | VisionLLM: Large Language Model is also an Open-Ended Decoder for
Vision-Centric Tasks | Large language models (LLMs) have notably accelerated progress towards
artificial general intelligence (AGI), with their impressive zero-shot capacity
for user-tailored tasks, endowing them with immense potential across a range of
applications. However, in the field of computer vision, despite the
availability of numerous powerful vision foundation models (VFMs), they are
still restricted to tasks in a pre-defined form, struggling to match the
open-ended task capabilities of LLMs. In this work, we present an LLM-based
framework for vision-centric tasks, termed VisionLLM. This framework provides a
unified perspective for vision and language tasks by treating images as a
foreign language and aligning vision-centric tasks with language tasks that can
be flexibly defined and managed using language instructions. An LLM-based
decoder can then make appropriate predictions based on these instructions for
open-ended tasks. Extensive experiments show that the proposed VisionLLM can
achieve different levels of task customization through language instructions,
from fine-grained object-level to coarse-grained task-level customization, all
with good results. It's noteworthy that, with a generalist LLM-based framework,
our model can achieve over 60% mAP on COCO, on par with detection-specific
models. We hope this model can set a new baseline for generalist vision and
language models. The demo shall be released based on
https://github.com/OpenGVLab/InternGPT. The code shall be released at
https://github.com/OpenGVLab/VisionLLM. | https://huggingface.co/papers/2305.11175 |
2023-05-19 | 2305.11147 | 3 | UniControl: A Unified Diffusion Model for Controllable Visual Generation
In the Wild | Achieving machine autonomy and human control often represent divergent
objectives in the design of interactive AI systems. Visual generative
foundation models such as Stable Diffusion show promise in navigating these
goals, especially when prompted with arbitrary languages. However, they often
fall short in generating images with spatial, structural, or geometric
controls. The integration of such controls, which can accommodate various
visual conditions in a single unified model, remains an unaddressed challenge.
In response, we introduce UniControl, a new generative foundation model that
consolidates a wide array of controllable condition-to-image (C2I) tasks within
a singular framework, while still allowing for arbitrary language prompts.
UniControl enables pixel-level-precise image generation, where visual
conditions primarily influence the generated structures and language prompts
guide the style and context. To equip UniControl with the capacity to handle
diverse visual conditions, we augment pretrained text-to-image diffusion models
and introduce a task-aware HyperNet to modulate the diffusion models, enabling
the adaptation to different C2I tasks simultaneously. Trained on nine unique
C2I tasks, UniControl demonstrates impressive zero-shot generation abilities
with unseen visual conditions. Experimental results show that UniControl often
surpasses the performance of single-task-controlled methods of comparable model
sizes. This control versatility positions UniControl as a significant
advancement in the realm of controllable visual generation. | https://huggingface.co/papers/2305.11147 |
2023-05-19 | 2305.10855 | 3 | TextDiffuser: Diffusion Models as Text Painters | Diffusion models have gained increasing attention for their impressive
generation abilities but currently struggle with rendering accurate and
coherent text. To address this issue, we introduce TextDiffuser, focusing on
generating images with visually appealing text that is coherent with
backgrounds. TextDiffuser consists of two stages: first, a Transformer model
generates the layout of keywords extracted from text prompts, and then
diffusion models generate images conditioned on the text prompt and the
generated layout. Additionally, we contribute the first large-scale text images
dataset with OCR annotations, MARIO-10M, containing 10 million image-text pairs
with text recognition, detection, and character-level segmentation annotations.
We further collect the MARIO-Eval benchmark to serve as a comprehensive tool
for evaluating text rendering quality. Through experiments and user studies, we
show that TextDiffuser is flexible and controllable to create high-quality text
images using text prompts alone or together with text template images, and
conduct text inpainting to reconstruct incomplete images with text. The code,
model, and dataset will be available at https://aka.ms/textdiffuser. | https://huggingface.co/papers/2305.10855 |
2023-05-19 | 2305.10763 | 3 | CLAPSpeech: Learning Prosody from Text Context with Contrastive
Language-Audio Pre-training | Improving text representation has attracted much attention to achieve
expressive text-to-speech (TTS). However, existing works only implicitly learn
the prosody with masked token reconstruction tasks, which leads to low training
efficiency and difficulty in prosody modeling. We propose CLAPSpeech, a
cross-modal contrastive pre-training framework that explicitly learns the
prosody variance of the same text token under different contexts. Specifically,
1) We encourage the model to connect the text context with its corresponding
prosody pattern in the joint multi-modal space with the elaborate design of the
encoder inputs and contrastive loss; 2) We introduce a multi-scale pre-training
pipeline to capture prosody patterns in multiple levels. We show how to
incorporate CLAPSpeech into existing TTS models for better prosody. Experiments
on three datasets not only show that CLAPSpeech could improve the prosody
prediction for existing TTS methods, but also demonstrate its generalization
ability to adapt to multiple languages and multi-speaker TTS. We also deeply
analyze the principle behind the performance of CLAPSpeech. Ablation studies
demonstrate the necessity of each component in our method. Source code and
audio samples are available at https://clapspeech.github.io. | https://huggingface.co/papers/2305.10763 |
2023-05-19 | 2305.10722 | 3 | Discriminative Diffusion Models as Few-shot Vision and Language Learners | Diffusion models, such as Stable Diffusion, have shown incredible performance
on text-to-image generation. Since text-to-image generation often requires
models to generate visual concepts with fine-grained details and attributes
specified in text prompts, can we leverage the powerful representations learned
by pre-trained diffusion models for discriminative tasks such as image-text
matching? To answer this question, we propose a novel approach, Discriminative
Stable Diffusion (DSD), which turns pre-trained text-to-image diffusion models
into few-shot discriminative learners. Our approach uses the cross-attention
score of a Stable Diffusion model to capture the mutual influence between
visual and textual information and fine-tune the model via attention-based
prompt learning to perform image-text matching. By comparing DSD with
state-of-the-art methods on several benchmark datasets, we demonstrate the
potential of using pre-trained diffusion models for discriminative tasks with
superior results on few-shot image-text matching. | https://huggingface.co/papers/2305.10722 |
2023-05-19 | 2305.11173 | 2 | Going Denser with Open-Vocabulary Part Segmentation | Object detection has been expanded from a limited number of categories to
open vocabulary. Moving forward, a complete intelligent vision system requires
understanding more fine-grained object descriptions, object parts. In this
paper, we propose a detector with the ability to predict both open-vocabulary
objects and their part segmentation. This ability comes from two designs.
First, we train the detector on the joint of part-level, object-level and
image-level data to build the multi-granularity alignment between language and
image. Second, we parse the novel object into its parts by its dense semantic
correspondence with the base object. These two designs enable the detector to
largely benefit from various data sources and foundation models. In
open-vocabulary part segmentation experiments, our method outperforms the
baseline by 3.3~7.3 mAP in cross-dataset generalization on PartImageNet,
and improves the baseline by 7.3 novel AP50 in cross-category
generalization on Pascal Part. Finally, we train a detector that generalizes to
a wide range of part segmentation datasets while achieving better performance
than dataset-specific training. | https://huggingface.co/papers/2305.11173 |
2023-05-19 | 2305.11171 | 2 | TrueTeacher: Learning Factual Consistency Evaluation with Large Language
Models | Factual consistency evaluation is often conducted using Natural Language
Inference (NLI) models, yet these models exhibit limited success in evaluating
summaries. Previous work improved such models with synthetic training data.
However, the data is typically based on perturbed human-written summaries,
which often differ in their characteristics from real model-generated summaries
and have limited coverage of possible factual errors. Alternatively, large
language models (LLMs) have recently shown promising results in directly
evaluating generative tasks, but are too computationally expensive for
practical use. Motivated by these limitations, we introduce TrueTeacher, a
method for generating synthetic data by annotating diverse model-generated
summaries using an LLM. Unlike prior work, TrueTeacher does not rely on
human-written summaries, and is multilingual by nature. Experiments on the TRUE
benchmark show that a student model trained using our data substantially
outperforms both the state-of-the-art model with similar capacity, and the LLM
teacher. In a systematic study, we compare TrueTeacher to existing synthetic
data generation methods and demonstrate its superiority and robustness to
domain-shift. We also show that our method generalizes to multilingual
scenarios. Lastly, we release our large scale synthetic dataset (1.4M
examples), generated using TrueTeacher, and a checkpoint trained on this data. | https://huggingface.co/papers/2305.11171 |
2023-05-19 | 2305.11129 | 2 | mLongT5: A Multilingual and Efficient Text-To-Text Transformer for
Longer Sequences | We present our work on developing a multilingual, efficient text-to-text
transformer that is suitable for handling long inputs. This model, called
mLongT5, builds upon the architecture of LongT5, while leveraging the
multilingual datasets used for pretraining mT5 and the pretraining tasks of
UL2. We evaluate this model on a variety of multilingual summarization and
question-answering tasks, and the results show stronger performance for mLongT5
when compared to existing multilingual models such as mBART or M-BERT. | https://huggingface.co/papers/2305.11129 |
2023-05-19 | 2305.10841 | 2 | GETMusic: Generating Any Music Tracks with a Unified Representation and
Diffusion Framework | Symbolic music generation aims to create musical notes, which can help users
compose music, such as generating target instrumental tracks from scratch, or
based on user-provided source tracks. Considering the diverse and flexible
combination between source and target tracks, a unified model capable of
generating arbitrary target tracks is crucial. Previous works fail to
meet this need due to inherent constraints in music representations and
model architectures. To this end, we propose a unified representation
and diffusion framework named GETMusic (`GET' stands for GEnerate music
Tracks), which includes a novel music representation named GETScore, and a
diffusion model named GETDiff. GETScore represents notes as tokens and
organizes them in a 2D structure, with tracks stacked vertically and
progressing horizontally over time. During training, tracks are randomly
selected as either the target or source. In the forward process, target tracks
are corrupted by masking their tokens, while source tracks remain as ground
truth. In the denoising process, GETDiff learns to predict the masked target
tokens, conditioning on the source tracks. With separate tracks in GETScore and
the non-autoregressive behavior of the model, GETMusic can explicitly control
the generation of any target tracks from scratch or conditioning on source
tracks. We conduct experiments on music generation involving six instrumental
tracks, resulting in a total of 665 combinations. GETMusic provides
high-quality results across diverse combinations and surpasses prior works
proposed for some specific combinations. | https://huggingface.co/papers/2305.10841 |
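The forward corruption described above (mask the target tracks' tokens while source tracks remain as ground truth) can be sketched in a few lines. The mask token id, array shapes, and function name below are illustrative assumptions, not GETMusic's actual implementation.

```python
import numpy as np

MASK = -1  # hypothetical mask token id

def corrupt_tracks(score, target_tracks):
    """Toy GETScore-style forward corruption: mask every token of the
    target tracks while the source tracks remain untouched.

    score: (num_tracks, num_steps) int array, tracks stacked vertically
    and progressing horizontally over time.
    """
    corrupted = score.copy()
    corrupted[target_tracks, :] = MASK
    return corrupted

score = np.arange(12).reshape(3, 4)  # 3 tracks, 4 time steps
noisy = corrupt_tracks(score, [1])   # track 1 is the generation target
```

In the denoising direction, a model in the GETDiff role would then predict the masked tokens conditioned on the untouched source rows.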
2023-05-19 | 2305.10434 | 2 | Learning the Visualness of Text Using Large Vision-Language Models | Visual text evokes an image in a person's mind, while non-visual text fails
to do so. A method to automatically detect visualness in text will unlock the
ability to augment text with relevant images, as neural text-to-image
generation and retrieval models operate on the implicit assumption that the
input text is visual in nature. We curate a dataset of 3,620 English sentences
and their visualness scores provided by multiple human annotators.
Additionally, we use documents that contain text and visual assets to create a
distantly supervised corpus of document text and associated images. We also
propose a fine-tuning strategy that adapts large vision-language models like
CLIP that assume a one-to-one correspondence between text and image to the task
of scoring text visualness from text input alone. Our strategy involves
modifying the model's contrastive learning objective to map text identified as
non-visual to a common NULL image while matching visual text to their
corresponding images in the document. We evaluate the proposed approach on its
ability to (i) classify visual and non-visual text accurately, and (ii) attend
over words that are identified as visual in psycholinguistic studies. Empirical
evaluation indicates that our approach performs better than several heuristics
and baseline models for the proposed task. Furthermore, to highlight the
importance of modeling the visualness of text, we conduct qualitative analyses
of text-to-image generation systems like DALL-E. | https://huggingface.co/papers/2305.10434 |
2023-05-19 | 2305.10912 | 1 | A Generalist Dynamics Model for Control | We investigate the use of transformer sequence models as dynamics models
(TDMs) for control. In a number of experiments in the DeepMind control suite,
we find that first, TDMs perform well in a single-environment learning setting
when compared to baseline models. Second, TDMs exhibit strong generalization
capabilities to unseen environments, both in a few-shot setting, where a
generalist model is fine-tuned with small amounts of data from the target
environment, and in a zero-shot setting, where a generalist model is applied to
an unseen environment without any further training. We further demonstrate that
generalizing system dynamics can work much better than generalizing optimal
behavior directly as a policy. This makes TDMs a promising ingredient for a
foundation model of control. | https://huggingface.co/papers/2305.10912 |
2023-05-19 | 2305.10874 | 1 | VideoFactory: Swap Attention in Spatiotemporal Diffusions for
Text-to-Video Generation | With the explosive popularity of AI-generated content (AIGC), video
generation has recently received a lot of attention. Generating videos guided
by text instructions poses significant challenges, such as modeling the complex
relationship between space and time, and the lack of large-scale text-video
paired data. Existing text-video datasets suffer from limitations in both
content quality and scale, or they are not open-source, rendering them
inaccessible for study and use. For model design, previous approaches extend
pretrained text-to-image generation models by adding temporal 1D
convolution/attention modules for video generation. However, these approaches
overlook the importance of jointly modeling space and time, inevitably leading
to temporal distortions and misalignment between texts and videos. In this
paper, we propose a novel approach that strengthens the interaction between
spatial and temporal perceptions. In particular, we utilize a swapped
cross-attention mechanism in 3D windows that alternates the "query" role
between spatial and temporal blocks, enabling mutual reinforcement for each
other. Moreover, to fully unlock model capabilities for high-quality video
generation and promote the development of the field, we curate a large-scale
and open-source video dataset called HD-VG-130M. This dataset comprises 130
million text-video pairs from the open domain, ensuring high-definition,
widescreen, and watermark-free characteristics. A smaller-scale yet more meticulously
cleaned subset further enhances the data quality, aiding models in achieving
superior performance. Experimental quantitative and qualitative results
demonstrate the superiority of our approach in terms of per-frame quality,
temporal correlation, and text-video alignment, with clear margins. | https://huggingface.co/papers/2305.10874 |
2023-05-19 | 2305.10688 | 1 | MolXPT: Wrapping Molecules with Text for Generative Pre-training | Generative pre-trained Transformer (GPT) has demonstrated great success
in natural language processing, and related techniques have been adapted to
molecular modeling. Considering that text is the most important record for
scientific discovery, in this paper, we propose MolXPT, a unified language
model of text and molecules pre-trained on SMILES (a sequence representation of
molecules) wrapped by text. Briefly, we detect the molecule names in each
sequence and replace them with the corresponding SMILES. In this way, the SMILES
could leverage the information from surrounding text, and vice versa. The above
wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem
are all fed into a language model for pre-training. Experimental results
demonstrate that MolXPT outperforms strong baselines of molecular property
prediction on MoleculeNet, performs comparably to the best model in
text-molecule translation while using less than half of its parameters, and
enables zero-shot molecular generation without finetuning. | https://huggingface.co/papers/2305.10688 |
2023-05-19 | 2305.10474 | 1 | Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models | Despite tremendous progress in generating high-quality images using diffusion
models, synthesizing a sequence of animated frames that are both photorealistic
and temporally coherent is still in its infancy. While off-the-shelf
billion-scale datasets for image generation are available, collecting similar
video data of the same scale is still challenging. Also, training a video
diffusion model is computationally much more expensive than its image
counterpart. In this work, we explore finetuning a pretrained image diffusion
model with video data as a practical solution for the video synthesis task. We
find that naively extending the image noise prior to video noise prior in video
diffusion leads to sub-optimal performance. Our carefully designed video noise
prior leads to substantially better performance. Extensive experimental
validation shows that our model, Preserve Your Own Correlation (PYoCo), attains
SOTA zero-shot text-to-video results on the UCF-101 and MSR-VTT benchmarks. It
also achieves SOTA video generation quality on the small-scale UCF-101
benchmark with a 10× smaller model using significantly less computation
than the prior art. | https://huggingface.co/papers/2305.10474 |
2023-05-21 | 2305.11206 | 26 | LIMA: Less Is More for Alignment | Large language models are trained in two stages: (1) unsupervised pretraining
from raw text, to learn general-purpose representations, and (2) large scale
instruction tuning and reinforcement learning, to better align to end tasks and
user preferences. We measure the relative importance of these two stages by
training LIMA, a 65B parameter LLaMa language model fine-tuned with the
standard supervised loss on only 1,000 carefully curated prompts and responses,
without any reinforcement learning or human preference modeling. LIMA
demonstrates remarkably strong performance, learning to follow specific
response formats from only a handful of examples in the training data,
including complex queries that range from planning trip itineraries to
speculating about alternate history. Moreover, the model tends to generalize
well to unseen tasks that did not appear in the training data. In a controlled
human study, responses from LIMA are either equivalent or strictly preferred to
GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard
and 65% versus DaVinci003, which was trained with human feedback. Taken
together, these results strongly suggest that almost all knowledge in large
language models is learned during pretraining, and only limited instruction
tuning data is necessary to teach models to produce high quality output. | https://huggingface.co/papers/2305.11206 |
2023-05-21 | 2305.11846 | 4 | Any-to-Any Generation via Composable Diffusion | We present Composable Diffusion (CoDi), a novel generative model capable of
generating any combination of output modalities, such as language, image,
video, or audio, from any combination of input modalities. Unlike existing
generative AI systems, CoDi can generate multiple modalities in parallel and
its input is not limited to a subset of modalities like text or image. Despite
the absence of training datasets for many combinations of modalities, we
propose to align modalities in both the input and output space. This allows
CoDi to freely condition on any input combination and generate any group of
modalities, even if they are not present in the training data. CoDi employs a
novel composable generation strategy which involves building a shared
multimodal space by bridging alignment in the diffusion process, enabling the
synchronized generation of intertwined modalities, such as temporally aligned
video and audio. Highly customizable and flexible, CoDi achieves strong
joint-modality generation quality, and outperforms or is on par with the
unimodal state-of-the-art for single-modality synthesis. The project page with
demonstrations and code is at https://codi-gen.github.io | https://huggingface.co/papers/2305.11846 |
2023-05-21 | 2305.11870 | 3 | Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D
Diffusion Probabilistic Models | We propose a 3D generation pipeline that uses diffusion models to generate
realistic human digital avatars. Due to the wide variety of human identities,
poses, and stochastic details, the generation of 3D human meshes has been a
challenging problem. To address this, we decompose the problem into 2D normal
map generation and normal map-based 3D reconstruction. Specifically, we first
simultaneously generate realistic normal maps for the front and backside of a
clothed human, dubbed dual normal maps, using a pose-conditional diffusion
model. For 3D reconstruction, we "carve" the prior SMPL-X mesh into a detailed
3D mesh according to the normal maps through mesh optimization. To further
enhance the high-frequency details, we present a diffusion resampling scheme on
both body and facial regions, thus encouraging the generation of realistic
digital avatars. We also seamlessly incorporate a recent text-to-image
diffusion model to support text-based human identity control. Our method,
namely, Chupa, is capable of generating realistic 3D clothed humans with better
perceptual quality and identity variety. | https://huggingface.co/papers/2305.11870 |
2023-05-21 | 2305.11588 | 3 | Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields | Text-driven 3D scene generation is widely applicable to video gaming, film
industry, and metaverse applications that have a large demand for 3D scenes.
However, existing text-to-3D generation methods are limited to producing 3D
objects with simple geometries and dreamlike styles that lack realism. In this
work, we present Text2NeRF, which is able to generate a wide range of 3D scenes
with complicated geometric structures and high-fidelity textures purely from a
text prompt. To this end, we adopt NeRF as the 3D representation and leverage a
pre-trained text-to-image diffusion model to constrain the 3D reconstruction of
the NeRF to reflect the scene description. Specifically, we employ the
diffusion model to infer the text-related image as the content prior and use a
monocular depth estimation method to offer the geometric prior. Both content
and geometric priors are utilized to update the NeRF model. To guarantee
textural and geometric consistency between different views, we introduce a
progressive scene inpainting and updating strategy for novel view synthesis of
the scene. Our method requires no additional training data but only a natural
language description of the scene as the input. Extensive experiments
demonstrate that our Text2NeRF outperforms existing methods in producing
photo-realistic, multi-view consistent, and diverse 3D scenes from a variety of
natural language prompts. | https://huggingface.co/papers/2305.11588 |
2023-05-21 | 2305.11337 | 3 | RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
Geometry and Texture | The techniques for 3D indoor scene capturing are widely used, but the meshes
produced leave much to be desired. In this paper, we propose "RoomDreamer",
which leverages powerful natural language to synthesize a new room with a
different style. Unlike existing image synthesis methods, our work addresses
the challenge of synthesizing both geometry and texture aligned to the input
scene structure and prompt simultaneously. The key insight is that a scene
should be treated as a whole, taking into account both scene texture and
geometry. The proposed framework consists of two significant components:
Geometry Guided Diffusion and Mesh Optimization. Geometry Guided Diffusion for
3D Scene guarantees the consistency of the scene style by applying the 2D prior
to the entire scene simultaneously. Mesh Optimization improves the geometry and
texture jointly and eliminates the artifacts in the scanned scene. To validate
the proposed method, real indoor scenes scanned with smartphones are used for
extensive experiments, through which the effectiveness of our method is
demonstrated. | https://huggingface.co/papers/2305.11337 |
2023-05-21 | 2305.11675 | 1 | Cinematic Mindscapes: High-quality Video Reconstruction from Brain
Activity | Reconstructing human vision from brain activities has been an appealing task
that helps to understand our cognitive process. Even though recent research has
seen great success in reconstructing static images from non-invasive brain
recordings, work on recovering continuous visual experiences in the form of
videos is limited. In this work, we propose Mind-Video that learns
spatiotemporal information from continuous fMRI data of the cerebral cortex
progressively through masked brain modeling, multimodal contrastive learning
with spatiotemporal attention, and co-training with an augmented Stable
Diffusion model that incorporates network temporal inflation. We show that
high-quality videos of arbitrary frame rates can be reconstructed with
Mind-Video using adversarial guidance. The recovered videos were evaluated with
various semantic and pixel-level metrics. We achieved an average accuracy of
85% in semantic classification tasks and 0.19 in structural similarity index
(SSIM), outperforming the previous state-of-the-art by 45%. We also show that
our model is biologically plausible and interpretable, reflecting established
physiological processes. | https://huggingface.co/papers/2305.11675 |
2023-05-22 | 2305.13048 | 19 | RWKV: Reinventing RNNs for the Transformer Era | Transformers have revolutionized almost all natural language processing (NLP)
tasks but suffer from memory and computational complexity that scales
quadratically with sequence length. In contrast, recurrent neural networks
(RNNs) exhibit linear scaling in memory and computational requirements but
struggle to match the same performance as Transformers due to limitations in
parallelization and scalability. We propose a novel model architecture,
Receptance Weighted Key Value (RWKV), that combines the efficient
parallelizable training of Transformers with the efficient inference of RNNs.
Our approach leverages a linear attention mechanism and allows us to formulate
the model as either a Transformer or an RNN, which parallelizes computations
during training and maintains constant computational and memory complexity
during inference, leading to the first non-transformer architecture to be
scaled to tens of billions of parameters. Our experiments reveal that RWKV
performs on par with similarly sized Transformers, suggesting that future work
can leverage this architecture to create more efficient models. This work
presents a significant step towards reconciling the trade-offs between
computational efficiency and model performance in sequence processing tasks. | https://huggingface.co/papers/2305.13048 |
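The linear-attention recurrence that gives RWKV constant-memory inference can be illustrated with a simplified sketch. This is not the paper's exact WKV formula (it omits the bonus term and receptance gating), only the decayed-accumulator idea behind the RNN view.

```python
import numpy as np

def linear_attention_step(state, k, v, decay):
    """One recurrent step of a simplified RWKV-style linear attention.
    state = (num, den): exponentially decayed running sums, so memory
    stays constant no matter how long the sequence is.
    """
    num, den = state
    w = np.exp(-decay)               # per-channel decay in (0, 1)
    num = w * num + np.exp(k) * v    # decayed weighted-value accumulator
    den = w * den + np.exp(k)        # decayed normalizer
    return (num, den), num / den     # new state, output for this step

d = 4
state = (np.zeros(d), np.full(d, 1e-9))
decay = np.full(d, 0.5)
for _ in range(10):                  # inference cost is O(1) per token
    k, v = np.random.randn(d), np.random.randn(d)
    state, out = linear_attention_step(state, k, v, decay)
```

Because the same computation can be unrolled over the whole sequence at once, training parallelizes like a Transformer while inference runs as this recurrence.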
2023-05-22 | 2305.11738 | 8 | CRITIC: Large Language Models Can Self-Correct with Tool-Interactive
Critiquing | Recent developments in large language models (LLMs) have been impressive.
However, these models sometimes show inconsistencies and problematic behavior,
such as hallucinating facts, generating flawed code, or creating offensive and
toxic content. Unlike these models, humans typically utilize external tools to
cross-check and refine their initial content, like using a search engine for
fact-checking, or a code interpreter for debugging. Inspired by this
observation, we introduce a framework called CRITIC that allows LLMs, which are
essentially "black boxes", to validate and progressively amend their own outputs
in a manner similar to human interaction with tools. More specifically,
starting with an initial output, CRITIC interacts with appropriate tools to
evaluate certain aspects of the text, and then revises the output based on the
feedback obtained during this validation process. Comprehensive evaluations
involving free-form question answering, mathematical program synthesis, and
toxicity reduction demonstrate that CRITIC consistently enhances the
performance of LLMs. Meanwhile, our research highlights the crucial importance
of external feedback in promoting the ongoing self-improvement of LLMs. | https://huggingface.co/papers/2305.11738 |
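The generate-critique-revise cycle above can be sketched as a generic loop. The function names and the toy calculator "tool" below are illustrative assumptions, not the paper's actual prompts or tools.

```python
def critic_loop(generate, critique_with_tool, revise, prompt, max_rounds=3):
    """CRITIC-style loop (a sketch): produce an answer, ask an external
    tool for feedback, and revise until the tool raises no objection."""
    output = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique_with_tool(prompt, output)
        if feedback is None:        # tool found nothing to fix
            break
        output = revise(prompt, output, feedback)
    return output

# Toy instantiation: the "tool" is a calculator checking an arithmetic claim.
generate = lambda p: "2 + 2 = 5"

def critique_with_tool(p, out):
    lhs, rhs = out.split("=")
    return None if eval(lhs) == int(rhs) else f"{lhs.strip()} is actually {eval(lhs)}"

revise = lambda p, out, fb: f"{out.split('=')[0].strip()} = {eval(out.split('=')[0])}"

answer = critic_loop(generate, critique_with_tool, revise, "What is 2 + 2?")
```

In the paper's setting the three callables would be LLM calls and real tools (a search engine, a code interpreter); the loop structure is the same.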
2023-05-22 | 2305.13077 | 7 | ControlVideo: Training-free Controllable Text-to-Video Generation | Text-driven diffusion models have unlocked unprecedented abilities in image
generation, whereas their video counterpart still lags behind due to the
excessive training cost of temporal modeling. Besides the training burden, the
generated videos also suffer from appearance inconsistency and structural
flickers, especially in long video synthesis. To address these challenges, we
design a training-free framework called ControlVideo to enable
natural and efficient text-to-video generation. ControlVideo, adapted from
ControlNet, leverages coarse structural consistency from input motion
sequences, and introduces three modules to improve video generation. Firstly,
to ensure appearance coherence between frames, ControlVideo adds fully
cross-frame interaction in self-attention modules. Secondly, to mitigate the
flicker effect, it introduces an interleaved-frame smoother that employs frame
interpolation on alternated frames. Finally, to produce long videos
efficiently, it utilizes a hierarchical sampler that separately synthesizes
each short clip with holistic coherency. Empowered with these modules,
ControlVideo outperforms the state of the art on extensive motion-prompt pairs
quantitatively and qualitatively. Notably, thanks to the efficient designs, it
generates both short and long videos within several minutes using one NVIDIA
2080Ti. Code is available at https://github.com/YBYBZhang/ControlVideo. | https://huggingface.co/papers/2305.13077 |
2023-05-22 | 2305.11854 | 5 | Multimodal Web Navigation with Instruction-Finetuned Foundation Models | The progress of autonomous web navigation has been hindered by the dependence
on billions of exploratory interactions via online reinforcement learning, and
domain-specific model designs that make it difficult to leverage generalization
from rich out-of-domain data. In this work, we study data-driven offline
training for web agents with vision-language foundation models. We propose an
instruction-following multimodal agent, WebGUM, that observes both webpage
screenshots and HTML pages and outputs web navigation actions, such as click
and type. WebGUM is trained by jointly finetuning an instruction-finetuned
language model and a vision encoder with temporal and local perception on a
large corpus of demonstrations. We empirically demonstrate this recipe improves
the agent's capabilities in grounded multimodal perception, HTML comprehension, and
multi-step reasoning, outperforming prior works by a significant margin. On the
MiniWoB, we improve over the previous best offline methods by more than 45.8%,
even outperforming the online-finetuned SoTA, humans, and a GPT-4-based agent. On the
WebShop benchmark, our 3-billion-parameter model achieves superior performance
to the existing SoTA, PaLM-540B. Furthermore, WebGUM exhibits strong positive
transfer to the real-world planning tasks on the Mind2Web. We also collect 347K
high-quality demonstrations using our trained models, 38 times larger than
prior work, and make them available to promote future research in this
direction. | https://huggingface.co/papers/2305.11854 |
2023-05-22 | 2305.13301 | 4 | Training Diffusion Models with Reinforcement Learning | Diffusion models are a class of flexible generative models trained with an
approximation to the log-likelihood objective. However, most use cases of
diffusion models are not concerned with likelihoods, but instead with
downstream objectives such as human-perceived image quality or drug
effectiveness. In this paper, we investigate reinforcement learning methods for
directly optimizing diffusion models for such objectives. We describe how
posing denoising as a multi-step decision-making problem enables a class of
policy gradient algorithms, which we refer to as denoising diffusion policy
optimization (DDPO), that are more effective than alternative reward-weighted
likelihood approaches. Empirically, DDPO is able to adapt text-to-image
diffusion models to objectives that are difficult to express via prompting,
such as image compressibility, and those derived from human feedback, such as
aesthetic quality. Finally, we show that DDPO can improve prompt-image
alignment using feedback from a vision-language model without the need for
additional data collection or human annotation. The project's website can be
found at http://rl-diffusion.github.io . | https://huggingface.co/papers/2305.13301 |
2023-05-22 | 2305.13050 | 3 | AudioToken: Adaptation of Text-Conditioned Diffusion Models for
Audio-to-Image Generation | In recent years, image generation has shown a great leap in performance,
where diffusion models play a central role. Although generating high-quality
images, such models are mainly conditioned on textual descriptions. This begs
the question: "how can we adopt such models to be conditioned on other
modalities?". In this paper, we propose a novel method utilizing latent
diffusion models trained for text-to-image generation to generate images
conditioned on audio recordings. Using a pre-trained audio encoding model, the
proposed method encodes audio into a new token, which can be considered as an
adaptation layer between the audio and text representations. Such a modeling
paradigm requires a small number of trainable parameters, making the proposed
approach appealing for lightweight optimization. Results suggest the proposed
method is superior to the evaluated baseline methods, considering objective and
subjective metrics. Code and samples are available at:
https://pages.cs.huji.ac.il/adiyoss-lab/AudioToken. | https://huggingface.co/papers/2305.13050 |
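The adaptation-layer idea (pool the audio encoder's frame embeddings and project them into the text-token embedding space) can be sketched as follows. The dimensions, mean pooling, and linear map are assumptions for illustration, not the paper's trained layer.

```python
import numpy as np

class AudioTokenAdapter:
    """Sketch of an adaptation layer: map a pooled audio embedding into
    the text-token embedding space so it can be spliced into a
    text-to-image prompt as a single pseudo token."""

    def __init__(self, audio_dim, text_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((audio_dim, text_dim)) * 0.02
        self.b = np.zeros(text_dim)

    def __call__(self, audio_embs):
        pooled = audio_embs.mean(axis=0)   # pool over time frames
        return pooled @ self.W + self.b    # one token-sized vector

adapter = AudioTokenAdapter(audio_dim=128, text_dim=768)
audio_embs = np.random.randn(50, 128)      # 50 frames from an audio encoder
token = adapter(audio_embs)                # shape (768,)
```

The resulting vector can then replace a placeholder word in the prompt's token-embedding sequence, which is what keeps the number of trainable parameters small.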
2023-05-22 | 2305.11841 | 3 | How Does Generative Retrieval Scale to Millions of Passages? | Popularized by the Differentiable Search Index, the emerging paradigm of
generative retrieval re-frames the classic information retrieval problem into a
sequence-to-sequence modeling task, forgoing external indices and encoding an
entire document corpus within a single Transformer. Although many different
approaches have been proposed to improve the effectiveness of generative
retrieval, they have only been evaluated on document corpora on the order of
100k in size. We conduct the first empirical study of generative retrieval
techniques across various corpus scales, ultimately scaling up to the entire MS
MARCO passage ranking task with a corpus of 8.8M passages and evaluating model
sizes up to 11B parameters. We uncover several findings about scaling
generative retrieval to millions of passages; notably, the central importance
of using synthetic queries as document representations during indexing, the
ineffectiveness of existing proposed architecture modifications when accounting
for compute cost, and the limits of naively scaling model parameters with
respect to retrieval performance. While we find that generative retrieval is
competitive with state-of-the-art dual encoders on small corpora, scaling to
millions of passages remains an important and unsolved challenge. We believe
these findings will be valuable for the community to clarify the current state
of generative retrieval, highlight the unique challenges, and inspire new
research directions. | https://huggingface.co/papers/2305.11841 |
2023-05-22 | 2305.11834 | 2 | Pengi: An Audio Language Model for Audio Tasks | In the domain of audio processing, Transfer Learning has facilitated the rise
of Self-Supervised Learning and Zero-Shot Learning techniques. These approaches
have led to the development of versatile models capable of tackling a wide
array of tasks, while delivering state-of-the-art performance. However, current
models inherently lack the capacity to produce the requisite language for
open-ended tasks, such as Audio Captioning or Audio Question & Answering. We
introduce Pengi, a novel Audio Language Model that leverages Transfer Learning
by framing all audio tasks as text-generation tasks. It takes an audio
recording and text as input, and generates free-form text as output. The input
audio is represented as a sequence of continuous embeddings by an audio
encoder. A text encoder does the same for the corresponding text input. Both
sequences are combined as a prefix to prompt a pre-trained frozen language
model. The unified architecture of Pengi enables open-ended tasks and
close-ended tasks without any additional fine-tuning or task-specific
extensions. When evaluated on 22 downstream tasks, our approach yields
state-of-the-art performance in several of them. Our results show that
connecting language models with audio models is a major step towards
general-purpose audio understanding. | https://huggingface.co/papers/2305.11834 |
2023-05-22 | 2305.11778 | 2 | Cross-Lingual Supervision improves Large Language Models Pre-training | The recent rapid progress in pre-training Large Language Models has relied on
using self-supervised language modeling objectives like next token prediction
or span corruption. On the other hand, Machine Translation Systems are mostly
trained using cross-lingual supervision that requires aligned data between
source and target languages. We demonstrate that pre-training Large Language
Models on a mixture of a self-supervised Language Modeling objective and the
supervised Machine Translation objective, therefore including cross-lingual
parallel data during pre-training, yields models with better in-context
learning abilities. As pre-training is a very resource-intensive process and a
grid search on the best mixing ratio between the two objectives is
prohibitively expensive, we propose a simple yet effective strategy to learn it
during pre-training. | https://huggingface.co/papers/2305.11778 |
2023-05-22 | 2305.11759 | 2 | Controlling the Extraction of Memorized Data from Large Language Models
via Prompt-Tuning | Large Language Models (LLMs) are known to memorize significant portions of
their training data. Parts of this memorized content have been shown to be
extractable by simply querying the model, which poses a privacy risk. We
present a novel approach which uses prompt-tuning to control the extraction
rates of memorized content in LLMs. We present two prompt training strategies
to increase and decrease extraction rates, which correspond to an attack and a
defense, respectively. We demonstrate the effectiveness of our techniques by
using models from the GPT-Neo family on a public benchmark. For the 1.3B
parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in
extraction rate compared to our baseline. Our defense can be tuned to achieve
different privacy-utility trade-offs by a user-specified hyperparameter. We
achieve an extraction rate reduction of up to 97.7% relative to our baseline,
with a perplexity increase of 16.9%. | https://huggingface.co/papers/2305.11759 |
2023-05-22 | 2305.11364 | 2 | Visualizing Linguistic Diversity of Text Datasets Synthesized by Large
Language Models | Large language models (LLMs) can be used to generate smaller, more refined
datasets via few-shot prompting for benchmarking, fine-tuning or other use
cases. However, understanding and evaluating these datasets is difficult, and
the failure modes of LLM-generated data are still not well understood.
Specifically, the data can be repetitive in surprising ways, not only
semantically but also syntactically and lexically. We present LinguisticLens, a
novel interactive visualization tool for making sense of and analyzing
syntactic diversity of LLM-generated datasets. LinguisticLens clusters text
along syntactic, lexical, and semantic axes. It supports hierarchical
visualization of a text dataset, allowing users to quickly scan for an overview
and inspect individual examples. The live demo is available at
shorturl.at/zHOUV. | https://huggingface.co/papers/2305.11364 |
2023-05-22 | 2305.11863 | 1 | Scaling laws for language encoding models in fMRI | Representations from transformer-based unidirectional language models are
known to be effective at predicting brain responses to natural language.
However, most studies comparing language models to brains have used GPT-2 or
similarly sized language models. Here we tested whether larger open-source
models such as those from the OPT and LLaMA families are better at predicting
brain responses recorded using fMRI. Mirroring scaling results from other
contexts, we found that brain prediction performance scales log-linearly with
model size from 125M to 30B parameter models, with ~15% increased encoding
performance as measured by correlation with a held-out test set across 3
subjects. Similar log-linear behavior was observed when scaling the size of the
fMRI training set. We also characterized scaling for acoustic encoding models
that use HuBERT, WavLM, and Whisper, and we found comparable improvements with
model size. A noise ceiling analysis of these large, high-performance encoding
models showed that performance is nearing the theoretical maximum for brain
areas such as the precuneus and higher auditory cortex. These results suggest
that increasing scale in both models and data will yield incredibly effective
models of language processing in the brain, enabling better scientific
understanding as well as applications such as decoding. | https://huggingface.co/papers/2305.11863 |
2023-05-22 | 2305.11840 | 1 | SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage
Leveraging Generative Models | Stereotype benchmark datasets are crucial to detect and mitigate social
stereotypes about groups of people in NLP models. However, existing datasets
are limited in size and coverage, and are largely restricted to stereotypes
prevalent in the Western society. This is especially problematic as language
technologies gain hold across the globe. To address this gap, we present
SeeGULL, a broad-coverage stereotype dataset, built by utilizing generative
capabilities of large language models such as PaLM and GPT-3, and leveraging a
globally diverse rater pool to validate the prevalence of those stereotypes in
society. SeeGULL is in English, and contains stereotypes about identity groups
spanning 178 countries across 8 different geo-political regions across 6
continents, as well as state-level identities within the US and India. We also
include fine-grained offensiveness scores for different stereotypes and
demonstrate their global disparities. Furthermore, we include comparative
annotations about the same groups by annotators living in the region vs. those
that are based in North America, and demonstrate that within-region stereotypes
about groups differ from those prevalent in North America. CONTENT WARNING:
This paper contains stereotype examples that may be offensive. | https://huggingface.co/papers/2305.11840 |
2023-05-22 | 2305.11837 | 1 | Comparing Software Developers with ChatGPT: An Empirical Investigation | The advent of automation in particular Software Engineering (SE) tasks has
transitioned from theory to reality. Numerous scholarly articles have
documented the successful application of Artificial Intelligence to address
issues in areas such as project management, modeling, testing, and development.
A recent innovation is the introduction of ChatGPT, an ML-infused chatbot,
touted as a resource proficient in generating programming codes and formulating
software testing strategies for developers and testers respectively. Although
there is speculation that AI-based computation can increase productivity and
even substitute software engineers in software development, there is currently
a lack of empirical evidence to verify this. Moreover, despite the primary
focus on enhancing the accuracy of AI systems, non-functional requirements
including energy efficiency, vulnerability, fairness (i.e., human bias), and
safety frequently receive insufficient attention. This paper posits that a
comprehensive comparison of software engineers and AI-based solutions,
considering various evaluation criteria, is pivotal in fostering human-machine
collaboration, enhancing the reliability of AI-based methods, and understanding
task suitability for humans or AI. Furthermore, it facilitates the effective
implementation of cooperative work structures and human-in-the-loop processes.
This paper conducts an empirical investigation, contrasting the performance of
software engineers and AI systems, like ChatGPT, across different evaluation
metrics. The empirical study includes a case of assessing ChatGPT-generated
code versus code produced by developers and uploaded to LeetCode. | https://huggingface.co/papers/2305.11837 |
2023-05-22 | 2305.11694 | 1 | QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set
Operations | Formulating selective information needs results in queries that implicitly
specify set operations, such as intersection, union, and difference. For
instance, one might search for "shorebirds that are not sandpipers" or
"science-fiction films shot in England". To study the ability of retrieval
systems to meet such information needs, we construct QUEST, a dataset of 3357
natural language queries with implicit set operations, that map to a set of
entities corresponding to Wikipedia documents. The dataset challenges models to
match multiple constraints mentioned in queries with corresponding evidence in
documents and correctly perform various set operations. The dataset is
constructed semi-automatically using Wikipedia category names. Queries are
automatically composed from individual categories, then paraphrased and further
validated for naturalness and fluency by crowdworkers. Crowdworkers also assess
the relevance of entities based on their documents and highlight attribution of
query constraints to spans of document text. We analyze several modern
retrieval systems, finding that they often struggle on such queries. Queries
involving negation and conjunction are particularly challenging and systems are
further challenged with combinations of these operations. | https://huggingface.co/papers/2305.11694 |
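The implicit set operations that QUEST targets can be illustrated with plain Python sets. The entity sets below are invented for illustration and are not drawn from the dataset, which maps queries to sets of Wikipedia documents:

```python
# Hypothetical category memberships (illustrative only).
shot_in_england = {"Alien", "Hot Fuzz", "Notting Hill", "Moon"}
science_fiction = {"Alien", "Gravity", "Moon"}

# "science-fiction films shot in England" -> intersection
answer = science_fiction & shot_in_england

shorebirds = {"dunlin", "sanderling", "oystercatcher", "avocet"}
sandpipers = {"dunlin", "sanderling"}

# "shorebirds that are not sandpipers" -> difference
answer2 = shorebirds - sandpipers
```

A retrieval system answering such queries must effectively perform these operations over evidence scattered across documents, which is what makes negation and conjunction queries hard.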
2023-05-22 | 2305.11598 | 1 | Introspective Tips: Large Language Model for In-Context Decision Making | The emergence of large language models (LLMs) has substantially influenced
natural language processing, demonstrating exceptional results across various
tasks. In this study, we employ ``Introspective Tips'' to facilitate LLMs in
self-optimizing their decision-making. By introspectively examining
trajectories, the LLM refines its policy by generating succinct and valuable tips.
Our method enhances the agent's performance in both few-shot and zero-shot
learning situations by considering three essential scenarios: learning from the
agent's past experiences, integrating expert demonstrations, and generalizing
across diverse games. Importantly, we accomplish these improvements without
fine-tuning the LLM parameters; rather, we adjust the prompt to generalize
insights from the three aforementioned situations. Our framework not only
supports but also emphasizes the advantage of employing LLMs in in-context
decision-making. Experiments involving over 100 games in TextWorld illustrate
the superior performance of our approach. | https://huggingface.co/papers/2305.11598 |
2023-05-22 | 2305.11541 | 1 | Empower Large Language Model to Perform Better on Industrial
Domain-Specific Question Answering | Large Language Model (LLM) has gained popularity and achieved remarkable
results in open-domain tasks, but its performance in real industrial
domain-specific scenarios is average due to its lack of specific domain
knowledge. This issue has attracted widespread attention, but there are few
relevant benchmarks available. In this paper, we provide a benchmark Question
Answering (QA) dataset named MSQA, centered around Microsoft products and IT
technical problems encountered by customers. This dataset contains industry
cloud-specific QA knowledge, an area not extensively covered in general LLMs,
making it well-suited for evaluating methods aiming to enhance LLMs'
domain-specific capabilities. In addition, we propose a new model interaction
paradigm that can empower LLM to achieve better performance on domain-specific
tasks where it is not proficient. Extensive experiments demonstrate that the
approach following our method outperforms the commonly used LLM with retrieval
methods. We make our source code and sample data available at:
https://aka.ms/Microsoft_QA. | https://huggingface.co/papers/2305.11541 |
2023-05-22 | 2305.11308 | 1 | Counterfactuals for Design: A Model-Agnostic Method For Design
Recommendations | Designers may often ask themselves how to adjust their design concepts to
achieve demanding functional goals. To answer such questions, designers must
often consider counterfactuals, weighing design alternatives and their
projected performance. This paper introduces Multi-objective Counterfactuals
for Design (MCD), a computational tool that automates and streamlines the
counterfactual search process and recommends targeted design modifications that
meet designers' unique requirements. MCD improves upon existing counterfactual
search methods by supporting multi-objective requirements, which are crucial in
design problems, and by decoupling the counterfactual search and sampling
processes, thus enhancing efficiency and facilitating objective trade-off
visualization. The paper showcases MCD's capabilities in complex engineering
tasks using three demonstrative bicycle design challenges. In the first, MCD
effectively identifies design modifications that quantifiably enhance
functional performance, strengthening the bike frame and saving weight. In the
second, MCD modifies parametric bike models in a cross-modal fashion to
resemble subjective text prompts or reference images. In a final
multidisciplinary case study, MCD tackles all the quantitative and subjective
design requirements introduced in the first two problems, while simultaneously
customizing a bike design to an individual rider's biomechanical attributes. By
exploring hypothetical design alterations and their impact on multiple design
objectives, MCD recommends effective design modifications for practitioners
seeking to make targeted enhancements to their designs. The code, test
problems, and datasets used in the paper are available to the public at
decode.mit.edu/projects/counterfactuals/. | https://huggingface.co/papers/2305.11308 |
2023-05-22 | 2305.11243 | 1 | Comparing Machines and Children: Using Developmental Psychology
Experiments to Assess the Strengths and Weaknesses of LaMDA Responses | Developmental psychologists have spent decades devising experiments to test
the intelligence and knowledge of infants and children, tracing the origin of
crucial concepts and capacities. Moreover, experimental techniques in
developmental psychology have been carefully designed to discriminate the
cognitive capacities that underlie particular behaviors. We propose that using
classical experiments from child development is a particularly effective way to
probe the computational abilities of AI models, in general, and LLMs in
particular. First, the methodological techniques of developmental psychology,
such as the use of novel stimuli to control for past experience or control
conditions to determine whether children are using simple associations, can be
equally helpful for assessing the capacities of LLMs. In parallel, testing LLMs
in this way can tell us whether the information that is encoded in text is
sufficient to enable particular responses, or whether those responses depend on
other kinds of information, such as information from exploration of the
physical world. In this work we adapt classical developmental experiments to
evaluate the capabilities of LaMDA, a large language model from Google. We
propose a novel LLM Response Score (LRS) metric which can be used to evaluate
other language models, such as GPT. We find that LaMDA generates appropriate
responses that are similar to those of children in experiments involving social
understanding, perhaps providing evidence that knowledge of these domains is
discovered through language. On the other hand, LaMDA's responses in early
object and action understanding, theory of mind, and especially causal
reasoning tasks are very different from those of young children, perhaps
showing that these domains require more real-world, self-initiated exploration
and cannot simply be learned from patterns in language input. | https://huggingface.co/papers/2305.11243 |
2023-05-23 | 2305.14314 | 55 | QLoRA: Efficient Finetuning of Quantized LLMs | We present QLoRA, an efficient finetuning approach that reduces memory usage
enough to finetune a 65B parameter model on a single 48GB GPU while preserving
full 16-bit finetuning task performance. QLoRA backpropagates gradients through
a frozen, 4-bit quantized pretrained language model into Low Rank
Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all
previous openly released models on the Vicuna benchmark, reaching 99.3% of the
performance level of ChatGPT while only requiring 24 hours of finetuning on a
single GPU. QLoRA introduces a number of innovations to save memory without
sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is
information theoretically optimal for normally distributed weights (b) double
quantization to reduce the average memory footprint by quantizing the
quantization constants, and (c) paged optimizers to manage memory spikes. We
use QLoRA to finetune more than 1,000 models, providing a detailed analysis of
instruction following and chatbot performance across 8 instruction datasets,
multiple model types (LLaMA, T5), and model scales that would be infeasible to
run with regular finetuning (e.g. 33B and 65B parameter models). Our results
show that QLoRA finetuning on a small high-quality dataset leads to
state-of-the-art results, even when using smaller models than the previous
SoTA. We provide a detailed analysis of chatbot performance based on both human
and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable
alternative to human evaluation. Furthermore, we find that current chatbot
benchmarks are not trustworthy to accurately evaluate the performance levels of
chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to
ChatGPT. We release all of our models and code, including CUDA kernels for
4-bit training. | https://huggingface.co/papers/2305.14314 |
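The NF4 idea, a 16-level codebook whose levels are quantiles of a standard normal, combined with per-block absmax scaling, can be sketched in pure Python. This is a simplified illustration, not the paper's implementation: the real NF4 codebook is asymmetric and contains an exact zero, and bitsandbytes implements it in CUDA.

```python
import random
from statistics import NormalDist


def nf4_codebook(bits=4):
    """Quantiles of N(0,1) at evenly spaced probabilities, rescaled to [-1, 1].

    Quantile spacing makes levels dense where normally distributed weights
    are most likely, which is the information-theoretic motivation for NF4.
    """
    n = 2 ** bits
    nd = NormalDist()
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    m = max(abs(q) for q in qs)
    return [q / m for q in qs]


def quantize_block(weights, codebook):
    """Map each weight to the nearest codebook index, with absmax scaling."""
    scale = max(abs(w) for w in weights)  # one fp constant stored per block
    idx = [min(range(len(codebook)), key=lambda k: abs(w / scale - codebook[k]))
           for w in weights]
    return idx, scale


def dequantize_block(idx, scale, codebook):
    return [codebook[k] * scale for k in idx]


random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]       # a 64-weight block
cb = nf4_codebook()
idx, scale = quantize_block(w, cb)
w_hat = dequantize_block(idx, scale, cb)
mean_err = sum(abs(a - b) for a, b in zip(w, w_hat)) / len(w)
```

Double quantization then quantizes the per-block `scale` constants themselves, shaving the average memory cost per weight further.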
2023-05-23 | 2305.14233 | 6 | Enhancing Chat Language Models by Scaling High-quality Instructional
Conversations | Fine-tuning on instruction data has been widely validated as an effective
practice for implementing chat language models like ChatGPT. Scaling the
diversity and quality of such data, although straightforward, stands a great
chance of leading to improved performance. This paper aims to improve the upper
bound of open-source models further. We first provide a systematically
designed, diverse, informative, large-scale dataset of instructional
conversations, UltraChat, which does not involve human queries. Our objective
is to capture the breadth of interactions that a human might have with an AI
assistant and employs a comprehensive framework to generate multi-turn
conversation iteratively. UltraChat contains 1.5 million high-quality
multi-turn dialogues and covers a wide range of topics and instructions. Our
statistical analysis of UltraChat reveals its superiority in various key
metrics, including scale, average length, diversity, coherence, etc.,
solidifying its position as a leading open-source dataset. Building upon
UltraChat, we fine-tune a LLaMA model to create a powerful conversational
model, UltraLLaMA. Our evaluations indicate that UltraLLaMA consistently
outperforms other open-source models, including Vicuna, the previously
recognized state-of-the-art open-source model. The dataset and the model will
be publicly released at https://github.com/thunlp/UltraChat. | https://huggingface.co/papers/2305.14233 |
2023-05-23 | 2305.14201 | 5 | Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks | We introduce Goat, a fine-tuned LLaMA model that significantly outperforms
GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated
dataset, Goat achieves state-of-the-art performance on BIG-bench arithmetic
sub-task. In particular, the zero-shot Goat-7B matches or even surpasses the
accuracy achieved by the few-shot PaLM-540B. Surprisingly, Goat can achieve
near-perfect accuracy on large-number addition and subtraction through
supervised fine-tuning only, which is almost impossible with previous
pretrained language models, such as Bloom, OPT, GPT-NeoX, etc. We attribute
Goat's exceptional performance to LLaMA's consistent tokenization of numbers.
To tackle more challenging tasks like large-number multiplication and division,
we propose an approach that classifies tasks based on their learnability, and
subsequently decomposes unlearnable tasks, such as multi-digit multiplication
and division, into a series of learnable tasks by leveraging basic arithmetic
principles. We thoroughly examine the performance of our model, offering a
comprehensive evaluation of the effectiveness of our proposed decomposition
steps. Additionally, Goat-7B can be easily trained using LoRA on a 24GB VRAM
GPU, facilitating reproducibility for other researchers. We release our model,
dataset, and the Python script for dataset generation. | https://huggingface.co/papers/2305.14201 |
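The decomposition idea can be sketched as follows (an illustrative reconstruction, not the paper's released script): multi-digit multiplication, an "unlearnable" task, reduces to single-digit multiplications, shifts, and additions, each of which is learnable on its own.

```python
def multiply_by_decomposition(a, b):
    """Decompose a * b into single-digit multiplications plus additions.

    Each step is one learnable sub-task: multiply `a` by one digit of `b`,
    shift by the digit's place value, then accumulate.
    """
    steps = []
    total = 0
    for pos, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** pos   # single-digit multiply + shift
        steps.append(f"{a} * {digit} * 10^{pos} = {partial}")
        total += partial                        # addition is also learnable
    return steps, total


steps, total = multiply_by_decomposition(1234, 567)
```

In the paper's framing, the model is fine-tuned to emit this chain of intermediate results rather than the product in one shot.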
2023-05-23 | 2305.13534 | 3 | How Language Model Hallucinations Can Snowball | A major risk of using language models in practical applications is their
tendency to hallucinate incorrect statements. Hallucinations are often
attributed to knowledge gaps in LMs, but we hypothesize that in some cases,
when justifying previously generated hallucinations, LMs output false claims
that they can separately recognize as incorrect. We construct three
question-answering datasets where ChatGPT and GPT-4 often state an incorrect
answer and offer an explanation with at least one incorrect claim. Crucially,
we find that ChatGPT and GPT-4 can identify 67% and 87% of their own mistakes,
respectively. We refer to this phenomenon as hallucination snowballing: an LM
over-commits to early mistakes, leading to more mistakes that it otherwise
would not make. | https://huggingface.co/papers/2305.13534 |
2023-05-23 | 2305.13009 | 3 | Textually Pretrained Speech Language Models | Speech language models (SpeechLMs) process and generate acoustic data only,
without textual supervision. In this work, we propose TWIST, a method for
training SpeechLMs using a warm-start from a pretrained textual language
models. We show using both automatic and human evaluations that TWIST
outperforms a cold-start SpeechLM across the board. We empirically analyze the
effect of different model design choices such as the speech tokenizer, the
pretrained textual model, and the dataset size. We find that model and dataset
scale both play an important role in constructing better-performing SpeechLMs.
Based on our observations, we present the largest (to the best of our
knowledge) SpeechLM both in terms of number of parameters and training data. We
additionally introduce two spoken versions of the StoryCloze textual benchmark
to further improve model evaluation and advance future research in the field.
We make speech samples, code and models publicly available:
https://pages.cs.huji.ac.il/adiyoss-lab/twist/. | https://huggingface.co/papers/2305.13009 |
2023-05-23 | 2305.13304 | 2 | RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text | The fixed-size context of Transformer makes GPT models incapable of
generating arbitrarily long text. In this paper, we introduce RecurrentGPT, a
language-based simulacrum of the recurrence mechanism in RNNs. RecurrentGPT is
built upon a large language model (LLM) such as ChatGPT and uses natural
language to simulate the Long Short-Term Memory mechanism in an LSTM. At each
timestep, RecurrentGPT generates a paragraph of text and updates its
language-based long short-term memory stored on the hard drive and the prompt,
respectively. This recurrence mechanism enables RecurrentGPT to generate texts
of arbitrary length without forgetting. Since human users can easily observe
and edit the natural language memories, RecurrentGPT is interpretable and
enables interactive generation of long text. RecurrentGPT is an initial step
towards next-generation computer-assisted writing systems beyond local editing
suggestions. In addition to producing AI-generated content (AIGC), we also
demonstrate the possibility of using RecurrentGPT as an interactive fiction
that directly interacts with consumers. We call this usage of generative models
by ``AI As Contents'' (AIAC), which we believe is the next form of conventional
AIGC. We further demonstrate the possibility of using RecurrentGPT to create
personalized interactive fiction that directly interacts with readers instead
of interacting with writers. More broadly, RecurrentGPT demonstrates the
utility of borrowing ideas from popular model designs in cognitive science and
deep learning for prompting LLMs. Our code is available at
https://github.com/aiwaves-cn/RecurrentGPT and an online demo is available at
https://www.aiwaves.org/recurrentgpt. | https://huggingface.co/papers/2305.13304 |
2023-05-23 | 2305.12050 | 2 | CodeCompose: A Large-Scale Industrial Deployment of AI-assisted Code
Authoring | The rise of large language models (LLMs) has unlocked various applications of
this technology in software development. In particular, generative LLMs have
been shown to effectively power AI-based code authoring tools that can suggest
entire statements or blocks of code during code authoring. In this paper we
present CodeCompose, an AI-assisted code authoring tool developed and deployed
at Meta internally. CodeCompose is based on the InCoder LLM that merges
generative capabilities with bi-directionality. We have scaled up CodeCompose
to serve tens of thousands of developers at Meta, across 10+ programming
languages and several coding surfaces.
We discuss unique challenges in terms of user experience and metrics that
arise when deploying such tools in large-scale industrial settings. We present
our experience in making design decisions about the model and system
architecture for CodeCompose that addresses these challenges. Finally, we
present metrics from our large-scale deployment of CodeCompose that shows its
impact on Meta's internal code authoring experience over a 15-day time window,
where 4.5 million suggestions were made by CodeCompose. Quantitative metrics
reveal that (i) CodeCompose has an acceptance rate of 22% across several
languages, and (ii) 8% of the code typed by users of CodeCompose is through
accepting code suggestions from CodeCompose. Qualitative feedback indicates an
overwhelming 91.5% positive reception for CodeCompose. In addition to assisting
with code authoring, CodeCompose is also introducing other positive side
effects such as encouraging developers to generate more in-code documentation,
helping them with the discovery of new APIs, etc. | https://huggingface.co/papers/2305.12050 |
2023-05-23 | 2305.13786 | 1 | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | We propose a novel multimodal video benchmark - the Perception Test - to
evaluate the perception and reasoning skills of pre-trained multimodal models
(e.g. Flamingo, BEiT-3, or GPT-4). Compared to existing benchmarks that focus
on computational tasks (e.g. classification, detection or tracking), the
Perception Test focuses on skills (Memory, Abstraction, Physics, Semantics) and
types of reasoning (descriptive, explanatory, predictive, counterfactual)
across video, audio, and text modalities, to provide a comprehensive and
efficient evaluation tool. The benchmark probes pre-trained models for their
transfer capabilities, in a zero-shot / few-shot or limited finetuning regime.
For these purposes, the Perception Test introduces 11.6k real-world videos, 23s
average length, designed to show perceptually interesting situations, filmed by
around 100 participants worldwide. The videos are densely annotated with six
types of labels (multiple-choice and grounded video question-answers, object
and point tracks, temporal action and sound segments), enabling both language
and non-language evaluations. The fine-tuning and validation splits of the
benchmark are publicly available (CC-BY license), in addition to a challenge
server with a held-out test split. Human baseline results compared to
state-of-the-art video QA models show a significant gap in performance (91.4%
vs 43.6%), suggesting that there is significant room for improvement in
multimodal video understanding.
Dataset, baselines code, and challenge server are available at
https://github.com/deepmind/perception_test | https://huggingface.co/papers/2305.13786 |
2023-05-23 | 2305.13735 | 1 | Aligning Large Language Models through Synthetic Feedback | Aligning large language models (LLMs) to human values has become increasingly
important as it enables sophisticated steering of LLMs, e.g., making them
follow given instructions while keeping them less toxic. However, it requires a
significant amount of human demonstrations and feedback. Recently, open-sourced
models have attempted to replicate the alignment learning process by distilling
data from already aligned LLMs like InstructGPT or ChatGPT. While this process
reduces human efforts, constructing these datasets has a heavy dependency on
the teacher models. In this work, we propose a novel framework for alignment
learning with almost no human labor and no dependency on pre-aligned LLMs.
First, we perform reward modeling (RM) with synthetic feedback by contrasting
responses from vanilla LLMs with various sizes and prompts. Then, we use the RM
for simulating high-quality demonstrations to train a supervised policy and for
further optimizing the model with reinforcement learning. Our resulting model,
Aligned Language Model with Synthetic Training dataset (ALMoST), outperforms
open-sourced models, including Alpaca, Dolly, and OpenAssistant, which are
trained on the outputs of InstructGPT or human-annotated instructions. Our
7B-sized model outperforms the 12-13B models in the A/B tests using GPT-4 as
the judge with about 75% winning rate on average. | https://huggingface.co/papers/2305.13735 |
2023-05-23 | 2305.12487 | 1 | Augmenting Autotelic Agents with Large Language Models | Humans learn to master open-ended repertoires of skills by imagining and
practicing their own goals. This autotelic learning process, literally the
pursuit of self-generated (auto) goals (telos), becomes more and more
open-ended as the goals become more diverse, abstract and creative. The
resulting exploration of the space of possible skills is supported by an
inter-individual exploration: goal representations are culturally evolved and
transmitted across individuals, in particular using language. Current
artificial agents mostly rely on predefined goal representations corresponding
to goal spaces that are either bounded (e.g. list of instructions), or
unbounded (e.g. the space of possible visual inputs) but are rarely endowed
with the ability to reshape their goal representations, to form new
abstractions or to imagine creative goals. In this paper, we introduce a
language model augmented autotelic agent (LMA3) that leverages a pretrained
language model (LM) to support the representation, generation and learning of
diverse, abstract, human-relevant goals. The LM is used as an imperfect model
of human cultural transmission; an attempt to capture aspects of humans'
common-sense, intuitive physics and overall interests. Specifically, it
supports three key components of the autotelic architecture: 1)~a relabeler
that describes the goals achieved in the agent's trajectories, 2)~a goal
generator that suggests new high-level goals along with their decomposition
into subgoals the agent already masters, and 3)~reward functions for each of
these goals. Without relying on any hand-coded goal representations, reward
functions or curriculum, we show that LMA3 agents learn to master a large
diversity of skills in a task-agnostic text-based environment. | https://huggingface.co/papers/2305.12487 |
2023-05-23 | 2305.12001 | 1 | OPT-R: Exploring the Role of Explanations in Finetuning and Prompting
for Reasoning Skills of Large Language Models | In this paper, we conduct a thorough investigation into the reasoning
capabilities of Large Language Models (LLMs), focusing specifically on the Open
Pretrained Transformers (OPT) models as a representative of such models. Our
study entails finetuning three different sizes of OPT on a carefully curated
reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned
without explanations, and OPT-RE, finetuned with explanations. We then evaluate
all models on 57 out-of-domain tasks drawn from the SUPER-NATURALINSTRUCTIONS
benchmark, covering 26 distinct reasoning skills, utilizing three prompting
techniques. Through a comprehensive grid of 27 configurations and 6,156 test
evaluations, we investigate the dimensions of finetuning, prompting, and scale
to understand the role of explanations on different reasoning skills. Our
findings reveal that having explanations in the few-shot exemplar has no
significant impact on the model's performance when the model is finetuned,
while positively affecting the non-finetuned counterpart. Moreover, we observe
a slight yet consistent increase in classification accuracy as we incorporate
explanations during prompting and finetuning, respectively. Finally, we offer
insights on which skills benefit the most from incorporating explanations
during finetuning and prompting, such as Numerical (+20.4%) and Analogical
(+13.9%) reasoning, as well as skills that exhibit negligible or negative
effects. | https://huggingface.co/papers/2305.12001 |
2023-05-23 | 2305.11938 | 1 | XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented
Languages | Data scarcity is a crucial issue for the development of highly multilingual
NLP systems. Yet for many under-represented languages (ULs) -- languages for
which NLP research is particularly far behind in meeting user needs -- it is
feasible to annotate small amounts of data. Motivated by this, we propose
XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather
than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by
speakers of high-resource languages; and its focus on under-represented
languages where this scarce-data scenario tends to be most realistic. XTREME-UP
evaluates the capabilities of language models across 88 under-represented
languages over 9 key user-centric technologies including ASR, OCR, MT, and
information access tasks that are of general utility. We create new datasets
for OCR, autocomplete, semantic parsing, and transliteration, and build on and
refine existing datasets for other tasks. XTREME-UP provides methodology for
evaluating many modeling scenarios including text-only, multi-modal (vision,
audio, and text), supervised parameter tuning, and in-context learning. We
evaluate commonly used models on the benchmark. We release all code and scripts
to train and evaluate models. | https://huggingface.co/papers/2305.11938 |
2023-05-24 | 2305.13840 | 4 | Control-A-Video: Controllable Text-to-Video Generation with Diffusion
Models | This paper presents a controllable text-to-video (T2V) diffusion model, named
Video-ControlNet, that generates videos conditioned on a sequence of control
signals, such as edge or depth maps. Video-ControlNet is built on a pre-trained
conditional text-to-image (T2I) diffusion model by incorporating a
spatial-temporal self-attention mechanism and trainable temporal layers for
efficient cross-frame modeling. A first-frame conditioning strategy is proposed
to enable the model to generate videos transferred from the image domain as
well as arbitrary-length videos in an auto-regressive manner. Moreover,
Video-ControlNet employs a novel residual-based noise initialization strategy
to introduce motion prior from an input video, producing more coherent videos.
With the proposed architecture and strategies, Video-ControlNet can achieve
resource-efficient convergence and generate superior quality and consistent
videos with fine-grained control. Extensive experiments demonstrate its success
in various video generative tasks such as video editing and video style
transfer, outperforming previous methods in terms of consistency and quality.
Project Page: https://controlavideo.github.io/ | https://huggingface.co/papers/2305.13840 |
2023-05-24 | 2305.13579 | 3 | Enhancing Detail Preservation for Customized Text-to-Image Generation: A
Regularization-Free Approach | Recent text-to-image generation models have demonstrated impressive
capability of generating text-aligned images with high fidelity. However,
generating images of a novel concept provided by a user input image is still a
challenging task. To address this problem, researchers have been exploring
various methods for customizing pre-trained text-to-image generation models.
Currently, most existing methods for customizing pre-trained text-to-image
generation models involve the use of regularization techniques to prevent
over-fitting. While regularization eases the challenge of customization and
enables successful content creation with respect to text guidance, it may
restrict the model's capability, resulting in the loss of detailed information
and inferior performance. In this work, we propose a novel framework for
customized text-to-image generation without the use of regularization.
Specifically, our proposed framework consists of an encoder network and a novel
sampling method which can tackle the over-fitting problem without the use of
regularization. With the proposed framework, we are able to customize a
large-scale text-to-image generation model within half a minute on a single
GPU, with only one image provided by the user. We demonstrate in experiments
with only one image provided by the user. We demonstrate in experiments that
our proposed framework outperforms existing methods, and preserves more
fine-grained details. | https://huggingface.co/papers/2305.13579 |
2023-05-25 | 2305.15038 | 5 | Is GPT-4 a Good Data Analyst? | As large language models (LLMs) have demonstrated their powerful capabilities
in plenty of domains and tasks, including context understanding, code
generation, language generation, data storytelling, etc., many data analysts
may worry that their jobs will be replaced by AI. This controversial
topic has drawn a lot of public attention. However, we are still at a stage
of divergent opinions without any definitive conclusion. Motivated by this, we
raise the research question of "is GPT-4 a good data analyst?" in this work and
aim to answer it by conducting head-to-head comparative studies. In detail, we
regard GPT-4 as a data analyst to perform end-to-end data analysis with
databases from a wide range of domains. We propose a framework to tackle the
problems by carefully designing the prompts for GPT-4 to conduct experiments.
We also design several task-specific evaluation metrics to systematically
compare the performance between several professional human data analysts and
GPT-4. Experimental results show that GPT-4 can achieve comparable performance
to humans. We also provide in-depth discussions about our results to shed light
on further studies before we reach the conclusion that GPT-4 can replace data
analysts. | https://huggingface.co/papers/2305.15038 |
2023-05-25 | 2305.14540 | 2 | LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond | With the recent appearance of LLMs in practical settings, having methods that
can effectively detect factual inconsistencies is crucial to reduce the
propagation of misinformation and improve trust in model outputs. When testing
on existing factual consistency benchmarks, we find that a few large language
models (LLMs) perform competitively on classification benchmarks for factual
inconsistency detection compared to traditional non-LLM methods. However, a
closer analysis reveals that most LLMs fail on more complex formulations of the
task and exposes issues with existing evaluation benchmarks, affecting
evaluation precision. To address this, we propose a new protocol for
inconsistency detection benchmark creation and implement it in a 10-domain
benchmark called SummEdits. This new benchmark is 20 times more cost-effective
per sample than previous benchmarks and highly reproducible, as we estimate
inter-annotator agreement at about 0.9. Most LLMs struggle on SummEdits, with
performance close to random chance. The best-performing model, GPT-4, is still
8% below estimated human performance, highlighting the gaps in LLMs' ability
to reason about facts and detect inconsistencies when they occur. | https://huggingface.co/papers/2305.14540 |
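The ~0.9 inter-annotator agreement cited for SummEdits is the kind of figure commonly computed with Cohen's kappa; a minimal sketch of that statistic (a generic formulation, not the paper's exact protocol):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Kappa corrects raw agreement for what two raters would match on by chance, which is why it is preferred over plain percent agreement for benchmark annotation.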
2023-05-25 | 2305.15486 | 1 | SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and
Reasoning | Open-world survival games pose significant challenges for AI algorithms due
to their multi-tasking, deep exploration, and goal prioritization requirements.
Despite reinforcement learning (RL) being popular for solving games, its high
sample complexity limits its effectiveness in complex open-world games like
Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's
original academic paper and use the knowledge learned to reason and play the
game through a large language model (LLM). Prompted with the LaTeX source as
game context and a description of the agent's current observation, our SPRING
framework employs a directed acyclic graph (DAG) with game-related questions as
nodes and dependencies as edges. We identify the optimal action to take in the
environment by traversing the DAG and calculating LLM responses for each node
in topological order, with the LLM's answer to the final node directly translating
to environment actions. In our experiments, we study the quality of in-context
"reasoning" induced by different forms of prompts under the setting of the
Crafter open-world environment. Our experiments suggest that LLMs, when
prompted with consistent chain-of-thought, have great potential in completing
sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4
outperforms all state-of-the-art RL baselines trained for 1M steps, despite
requiring no training itself. Finally, we show the potential of games as a test bed for LLMs. | https://huggingface.co/papers/2305.15486 |
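SPRING's core loop, answering DAG nodes in topological order and feeding earlier answers forward, can be sketched as follows; the question set, dependency structure, and `ask_llm` stub are illustrative assumptions, not the paper's actual prompts:

```python
from graphlib import TopologicalSorter

# Hypothetical question DAG: nodes are game-related questions,
# and each node lists the questions it depends on.
QUESTIONS = {
    "q_resources": "What resources are visible nearby?",
    "q_tools": "Which tools can be crafted from those resources?",
    "q_action": "What is the single best action to take right now?",
}
DEPENDENCIES = {  # node -> set of prerequisite nodes
    "q_resources": set(),
    "q_tools": {"q_resources"},
    "q_action": {"q_tools"},
}

def ask_llm(question, context):
    """Stand-in for a real LLM call; answers deterministically for the sketch."""
    return f"answer({question[:12]}...)"

def plan_action(observation):
    """Answer questions in topological order, feeding earlier answers forward."""
    answers = {}
    for node in TopologicalSorter(DEPENDENCIES).static_order():
        context = {"observation": observation, **answers}
        answers[node] = ask_llm(QUESTIONS[node], context)
    return answers["q_action"]  # the final node's answer maps to an env action
```

`TopologicalSorter.static_order()` guarantees each question is answered only after all of its prerequisites, which is what lets the final node's answer condition on the whole chain of reasoning.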
2023-05-25 | 2305.14878 | 1 | Leveraging GPT-4 for Automatic Translation Post-Editing | While Neural Machine Translation (NMT) represents the leading approach to
Machine Translation (MT), the outputs of NMT models still require translation
post-editing to rectify errors and enhance quality, particularly under critical
settings. In this work, we formalize the task of translation post-editing with
Large Language Models (LLMs) and explore the use of GPT-4 to automatically
post-edit NMT outputs across several language pairs. Our results demonstrate
that GPT-4 is adept at translation post-editing and produces meaningful edits
even when the target language is not English. Notably, we achieve
state-of-the-art performance on WMT-22 English-Chinese, English-German,
Chinese-English and German-English language pairs using GPT-4 based
post-editing, as evaluated by state-of-the-art MT quality metrics. | https://huggingface.co/papers/2305.14878 |
2023-05-25 | 2305.14564 | 1 | PEARL: Prompting Large Language Models to Plan and Execute Actions Over
Long Documents | Strategies such as chain-of-thought prompting improve the performance of
large language models (LLMs) on complex reasoning tasks by decomposing input
examples into intermediate steps. However, it remains unclear how to apply such
methods to reason over long input documents, in which both the decomposition
and the output of each intermediate step are non-trivial to obtain. In this
work, we propose PEARL, a prompting framework to improve reasoning over long
documents, which consists of three stages: action mining, plan formulation, and
plan execution. More specifically, given a question about a long document,
PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,
FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain
the answer. Each stage of PEARL is implemented via zero-shot or few-shot
prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate
PEARL on a challenging subset of the QuALITY dataset, which contains questions
that require complex reasoning over long narrative texts. PEARL outperforms
zero-shot and chain-of-thought prompting on this dataset, and ablation
experiments show that each stage of PEARL is critical to its performance.
Overall, PEARL is a first step towards leveraging LLMs to reason over long
documents. | https://huggingface.co/papers/2305.14564 |
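PEARL's plan-execution stage, running a mined sequence of actions over a document while later steps can read earlier outputs, can be sketched as below; the action implementations are stand-ins for LLM calls and the interface is an assumption, not the paper's actual code:

```python
# A "plan" is a list of (action, argument) pairs produced by a planning
# prompt; each action runs over the document and its output is stored in
# a shared memory that later steps may reference.

def summarize(doc, arg, memory):
    return doc[:60] + "..."           # stand-in for an LLM summarization call

def find_event(doc, arg, memory):
    return f"event matching '{arg}'"  # stand-in for an LLM extraction call

ACTIONS = {"SUMMARIZE": summarize, "FIND_EVENT": find_event}

def execute_plan(document, plan):
    memory = {}
    for step, (action, arg) in enumerate(plan):
        memory[f"step{step}"] = ACTIONS[action](document, arg, memory)
    return memory

plan = [("SUMMARIZE", None), ("FIND_EVENT", "the protagonist's decision")]
```

Decomposing a long-document question into such typed actions is what lets each LLM call stay small while the memory threads intermediate results through to the final answer.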
2023-05-26 | 2305.16291 | 11 | Voyager: An Open-Ended Embodied Agent with Large Language Models | We introduce Voyager, the first LLM-powered embodied lifelong learning agent
in Minecraft that continuously explores the world, acquires diverse skills, and
makes novel discoveries without human intervention. Voyager consists of three
key components: 1) an automatic curriculum that maximizes exploration, 2) an
ever-growing skill library of executable code for storing and retrieving
complex behaviors, and 3) a new iterative prompting mechanism that incorporates
environment feedback, execution errors, and self-verification for program
improvement. Voyager interacts with GPT-4 via black-box queries, which bypasses
the need for model parameter fine-tuning. The skills developed by Voyager are
temporally extended, interpretable, and compositional, which compounds the
agent's abilities rapidly and alleviates catastrophic forgetting. Empirically,
Voyager shows strong in-context lifelong learning capability and exhibits
exceptional proficiency in playing Minecraft. It obtains 3.3x more unique
items, travels 2.3x longer distances, and unlocks key tech tree milestones up
to 15.3x faster than prior SOTA. Voyager is able to utilize the learned skill
library in a new Minecraft world to solve novel tasks from scratch, while other
techniques struggle to generalize. We open-source our full codebase and prompts
at https://voyager.minedojo.org/. | https://huggingface.co/papers/2305.16291 |
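The ever-growing skill library can be sketched as a store of executable code keyed by natural-language descriptions; the class below is an illustrative assumption (retrieval here is naive keyword overlap rather than the embedding search a real system would use):

```python
class SkillLibrary:
    """Toy skill library: stores code by description, retrieves by relevance."""

    def __init__(self):
        self.skills = {}  # description -> code string

    def add(self, description, code):
        self.skills[description] = code

    def retrieve(self, task, top_k=2):
        # Rank stored skills by word overlap with the task description.
        def overlap(desc):
            return len(set(desc.lower().split()) & set(task.lower().split()))
        ranked = sorted(self.skills, key=overlap, reverse=True)
        return [(d, self.skills[d]) for d in ranked[:top_k]]

lib = SkillLibrary()
lib.add("mine wood from a tree", "def mine_wood(bot): ...")
lib.add("craft a wooden pickaxe", "def craft_pickaxe(bot): ...")
```

Because skills are stored as code rather than weights, new behaviors compose with old ones and nothing is overwritten, which is the mechanism the abstract credits for alleviating catastrophic forgetting.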
2023-05-26 | 2305.16213 | 9 | ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with
Variational Score Distillation | Score distillation sampling (SDS) has shown great promise in text-to-3D
generation by distilling pretrained large-scale text-to-image diffusion models,
but suffers from over-saturation, over-smoothing, and low-diversity problems.
In this work, we propose to model the 3D parameter as a random variable instead
of a constant as in SDS and present variational score distillation (VSD), a
principled particle-based variational framework to explain and address the
aforementioned issues in text-to-3D generation. We show that SDS is a special
case of VSD and leads to poor samples with both small and large CFG weights. In
comparison, VSD works well with various CFG weights as ancestral sampling from
diffusion models and simultaneously improves the diversity and sample quality
with a common CFG weight (i.e., 7.5). We further present various improvements
in the design space for text-to-3D such as distillation time schedule and
density initialization, which are orthogonal to the distillation algorithm yet
not well explored. Our overall approach, dubbed ProlificDreamer, can generate
high rendering resolution (i.e., 512×512) and high-fidelity NeRF with
rich structure and complex effects (e.g., smoke and drops). Further,
initialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and
photo-realistic. Project page: https://ml.cs.tsinghua.edu.cn/prolificdreamer/ | https://huggingface.co/papers/2305.16213 |
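For reference, the score distillation sampling objective that VSD generalizes is commonly written as follows (a standard formulation with illustrative notation, not copied from this paper):

```latex
% SDS gradient w.r.t. 3D parameters \theta, where g(\theta) renders an image,
% x_t is the noised rendering, y is the text prompt, \epsilon_\phi is the
% pretrained diffusion model's noise prediction, and w(t) is a weighting.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,
      \big(\epsilon_\phi(x_t; y, t) - \epsilon\big)\,
      \frac{\partial g(\theta)}{\partial \theta} \right]
```

VSD's change, as the abstract describes, is to treat θ as a random variable: the Gaussian noise ε is replaced by the score of a learned distribution over renderings of the current particles, so SDS is recovered as the special case where that distribution collapses to a single point.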
2023-05-26 | 2305.15717 | 5 | The False Promise of Imitating Proprietary LLMs | An emerging method to cheaply improve a weaker language model is to finetune
it on outputs from a stronger model, such as a proprietary system like ChatGPT
(e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply
imitate the proprietary model's capabilities using a weaker open-source model.
In this work, we critically analyze this approach. We first finetune a series
of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data
sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the
models using crowd raters and canonical NLP benchmarks. Initially, we were
surprised by the output quality of our imitation models -- they appear far
better at following instructions, and crowd workers rate their outputs as
competitive with ChatGPT. However, when conducting more targeted automatic
evaluations, we find that imitation models close little to none of the gap from
the base LM to ChatGPT on tasks that are not heavily supported in the imitation
data. We show that these performance discrepancies may slip past human raters
because imitation models are adept at mimicking ChatGPT's style but not its
factuality. Overall, we conclude that model imitation is a false promise: there
exists a substantial capabilities gap between open and closed LMs that, with
current methods, can only be bridged using an unwieldy amount of imitation data
or by using more capable base LMs. In turn, we argue that the highest leverage
action for improving open-source models is to tackle the difficult challenge of
developing better base LMs, rather than taking the shortcut of imitating
proprietary systems. | https://huggingface.co/papers/2305.15717 |