arxiv_id | abstract |
---|---|
2408.16673 | Large language models rely on Supervised Fine-Tuning (SFT) to specialize in downstream tasks. Cross Entropy (CE) loss is the de facto choice in SFT, but it often leads to overfitting and limited output diversity due to its aggressive updates to the data distribution. This paper aims to address these issues by introducing the maximum entropy principle, which favors models with flatter distributions that still effectively capture the data. Specifically, we develop a new distribution matching method called GEM, which solves reverse Kullback-Leibler divergence minimization with an entropy regularizer. For the SFT of Llama-3-8B models, GEM outperforms CE in several aspects. First, when applied to the UltraFeedback dataset to develop general instruction-following abilities, GEM exhibits reduced overfitting, evidenced by lower perplexity and better performance on the IFEval benchmark. Furthermore, GEM enhances output diversity, leading to performance gains of up to 7 points on math reasoning and code generation tasks using best-of-n sampling, even without domain-specific data. Second, when finetuning with domain-specific datasets for math reasoning and code generation, GEM also shows less overfitting and improvements of up to 10 points compared with CE. |
2408.16293 | Language models have demonstrated remarkable performance in solving reasoning tasks; however, even the strongest models still occasionally make reasoning mistakes. Recently, there has been active research aimed at improving reasoning accuracy, particularly by using pretrained language models to "self-correct" their mistakes via multi-round prompting. In this paper, we follow this line of work but focus on understanding the usefulness of incorporating "error-correction" data directly into the pretraining stage. This data consists of erroneous solution steps immediately followed by their corrections. Using a synthetic math dataset, we show promising results: this type of pretraining data can help language models achieve higher reasoning accuracy directly (i.e., through simple auto-regression, without multi-round prompting) compared to pretraining on the same amount of error-free data. We also delve into many details, such as (1) how this approach differs from beam search, (2) how such data can be prepared, (3) whether masking is needed on the erroneous tokens, (4) the amount of error required, (5) whether such data can be deferred to the fine-tuning stage, and many others. |
2312.07104 | Large language models (LLMs) are increasingly used for complex tasks requiring multiple chained generation calls, advanced prompting techniques, control flow, and interaction with external environments. However, efficient systems for programming and executing these applications are lacking.
To bridge this gap, we introduce SGLang, a Structured Generation Language for LLMs. SGLang is designed for the efficient programming of LLMs and incorporates primitives for common LLM programming patterns. We have implemented SGLang as a domain-specific language embedded in Python, and we developed an interpreter, a compiler, and a high-performance runtime for SGLang.
These components work together to enable optimizations such as parallelism, batching, caching, sharing, and other compilation techniques. Additionally, we propose RadixAttention, a novel technique that maintains a Least Recently Used (LRU) cache of the Key-Value (KV) cache for all requests in a radix tree, enabling automatic KV cache reuse across multiple generation calls at runtime. SGLang simplifies the writing of LLM programs and boosts execution efficiency.
Our experiments demonstrate that SGLang can speed up common LLM tasks by a large factor while reducing code complexity and enhancing control. |
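RadixAttention is only named in the abstract above; as a reading aid, here is a toy, framework-free Python sketch of the underlying bookkeeping, with a plain trie standing in for the radix tree, per-token KV handles, and least-recently-used leaf eviction. The class names and string KV handles are invented for illustration; SGLang's real runtime manages GPU KV blocks and a compressed radix tree.

```python
import time

class KVNode:
    """One trie node holding a cached KV segment for a single token."""
    def __init__(self):
        self.children = {}       # token id -> KVNode
        self.kv_handle = None    # stand-in for a cached KV block on the GPU
        self.last_access = 0.0

class PrefixCache:
    """Toy prefix cache in the spirit of RadixAttention: longest-prefix reuse
    plus LRU eviction of leaves when memory must be reclaimed."""
    def __init__(self):
        self.root = KVNode()

    def match_prefix(self, tokens):
        """Length of the longest cached prefix of `tokens` (KV that can be reused)."""
        node, length = self.root, 0
        for t in tokens:
            child = node.children.get(t)
            if child is None or child.kv_handle is None:
                break
            child.last_access = time.monotonic()
            node, length = child, length + 1
        return length

    def insert(self, tokens, kv_handles):
        """Cache one KV handle per token position of a finished request."""
        node = self.root
        for t, kv in zip(tokens, kv_handles):
            node = node.children.setdefault(t, KVNode())
            node.kv_handle = kv
            node.last_access = time.monotonic()

    def evict_one(self):
        """Free the least recently used leaf (colder, deeper KV goes first)."""
        def leaves(node, parent, token):
            if parent is not None and not node.children:
                yield node, parent, token
            for t, child in node.children.items():
                yield from leaves(child, node, t)
        victims = list(leaves(self.root, None, None))
        if victims:
            _, parent, token = min(victims, key=lambda v: v[0].last_access)
            del parent.children[token]

# Two requests sharing a system-prompt prefix reuse its cached KV:
cache = PrefixCache()
cache.insert([1, 2, 3, 4], ["kv1", "kv2", "kv3", "kv4"])
print(cache.match_prefix([1, 2, 3, 9]))  # -> 3 tokens of KV reused
```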
2408.13359 | Finding the optimal learning rate for language model pretraining is a challenging task. This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with billions or trillions of parameters. Recent studies propose using small proxy models and small corpora to perform hyperparameter searches and transposing the optimal parameters to large models and large corpora. While the zero-shot transferability is theoretically and empirically proven for model-size-related hyperparameters, like depth and width, the zero-shot transfer from small corpus to large corpus is underexplored. In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between variables and demonstrated its transferability across model sizes. Based on this observation, we propose a new learning rate scheduler, *Power scheduler*, that is agnostic about the number of training tokens and batch size. Experiments show that combining the Power scheduler with Maximum Update Parameterization (µP) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance to state-of-the-art small language models. We open-source these pretrained models at https://ibm.biz/BdKhLa. |
2408.12857 | Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity. These methods leverage the low-rank structure of gradients to project optimizer states into a subspace using a projection matrix found by singular value decomposition (SVD). However, the convergence of these algorithms is highly dependent on the update rules of their projection matrix. In this work, we provide the *first* convergence guarantee for arbitrary update rules of the projection matrix. This guarantee is generally applicable to optimizers that can be analyzed with Hamiltonian Descent, including most common ones, such as LION and Adam. Inspired by our theoretical understanding, we propose Online Subspace Descent, a new family of subspace descent optimizers that do not require SVD. Instead of updating the projection matrix with eigenvectors, Online Subspace Descent updates the projection matrix with online PCA. Online Subspace Descent is flexible and introduces only minimal overhead to training. We show that for the task of pretraining LLaMA models ranging from 60M to 7B parameters on the C4 dataset, Online Subspace Descent achieves lower perplexity and better downstream task performance than state-of-the-art low-rank training methods across different settings and narrows the gap with full-rank baselines. |
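The abstract states the key change (online PCA instead of SVD for the projection matrix) without giving the update rules; the sketch below shows one possible instantiation under explicit assumptions: an Oja-style online-PCA step for the projection and a momentum optimizer whose state lives only in the projected subspace. Both rules are illustrative stand-ins, not the paper's algorithm.

```python
import torch

def online_pca_update(P, grad, lr_pca=1e-3):
    """One Oja-style online-PCA step for the projection matrix P (d x r):
    nudge P toward the top-r subspace of the gradient covariance, then
    re-orthonormalize. (Illustrative assumption; the paper's rule may differ.)"""
    P = P + lr_pca * grad @ (grad.T @ P)
    Q, _ = torch.linalg.qr(P)
    return Q

def subspace_momentum_step(param, grad, P, m, lr=1e-2, beta=0.9):
    """Subspace-descent step: the optimizer state `m` (r x n) lives in the
    r-dimensional subspace, so optimizer memory scales with r instead of d."""
    g_low = P.T @ grad               # project the full gradient (d x n) down
    m.mul_(beta).add_(g_low)         # momentum kept only in the subspace
    param.add_(P @ m, alpha=-lr)     # lift the update back to full size
    return param, m

# One training step for a d x n weight matrix with rank-r optimizer state.
d, n, r = 512, 256, 16
param, grad = torch.randn(d, n), torch.randn(d, n)
P, _ = torch.linalg.qr(torch.randn(d, r))
m = torch.zeros(r, n)
param, m = subspace_momentum_step(param, grad, P, m)
P = online_pca_update(P, grad)
```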
2405.12981 | Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this paper, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by another 2× while maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs which are possible with traditional MQA, enabling inference with longer sequence lengths and larger batch sizes than would otherwise be possible. |
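Below is a minimal PyTorch sketch of the core idea only (not the paper's architecture): adjacent layers are paired so that one layer owns and caches the K/V projections while its neighbor reuses them, halving what must be stored in the KV cache. For simplicity it uses plain multi-head attention rather than the MQA that CLA builds on, and the class and argument names are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedKVAttention(nn.Module):
    """Attention layer that either owns its K/V projections or reuses the
    K/V produced by the adjacent (owner) layer."""
    def __init__(self, d_model, n_heads, owns_kv=True):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)
        self.owns_kv = owns_kv
        if owns_kv:  # only owner layers materialize (and would cache) K/V
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)

    def _split(self, x):
        b, t, _ = x.shape
        return x.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

    def forward(self, x, shared_kv=None):
        q = self._split(self.q_proj(x))
        if self.owns_kv:
            shared_kv = (self._split(self.k_proj(x)), self._split(self.v_proj(x)))
        k, v = shared_kv                      # sharer layers reuse the owner's K/V
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(x.size(0), x.size(1), -1)
        return self.o_proj(out), shared_kv

# Pairing an owner with a sharer halves the K/V that has to be cached.
owner = SharedKVAttention(256, 8, owns_kv=True)
sharer = SharedKVAttention(256, 8, owns_kv=False)
x = torch.randn(2, 16, 256)
h, kv = owner(x)
h, _ = sharer(h, shared_kv=kv)
```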
2105.11921 | Professional summaries are written with document-level information, such as the theme of the document, in mind. This is in contrast with most seq2seq decoders which simultaneously learn to focus on salient content, while deciding what to generate, at each decoding step. With the motivation to narrow this gap, we introduce Focus Attention Mechanism, a simple yet effective method to encourage decoders to proactively generate tokens that are similar or topical to the input document. Further, we propose a Focus Sampling method to enable generation of diverse summaries, an area currently understudied in summarization. When evaluated on the BBC extreme summarization task, two state-of-the-art models augmented with Focus Attention generate summaries that are closer to the target and more faithful to their input documents,
outperforming their vanilla counterparts on ROUGE and multiple faithfulness measures.
We also empirically demonstrate that Focus Sampling is more effective in generating diverse and faithful summaries than top-k or nucleus sampling-based decoding methods. |
2407.01082 | Large Language Models (LLMs) generate long-form text by successively sampling the next token based on the probability distribution of the token vocabulary at each decoding step. Current popular truncation sampling methods such as top-p sampling, also known as nucleus sampling, often struggle to balance coherence and creativity in generating text, particularly when using higher temperatures. To address this issue, we propose min-p, a dynamic truncation sampling method that establishes a minimum base percentage threshold for tokens, which scales according to the probability of the top candidate token. Through experiments on several benchmarks, such as GPQA, GSM8K and AlpacaEval Creative Writing, we demonstrate that min-p improves the coherence and quality of generated text even at high temperatures, while also facilitating more creative and diverse outputs compared to top-p and other sampling methods. As of writing, min-p has been adopted by multiple open-source LLM implementations and has been independently assessed by members of the open-source LLM community, further validating its practical utility and potential. Code, evaluation code, and results are available at https://github.com/menhguin/minp_paper/. |
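Since the truncation rule is fully described above, a short sketch makes it concrete: keep any token whose probability is at least a base fraction of the top token's probability, renormalize, and sample. The default `p_base` value and the ordering relative to temperature scaling are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def sample_min_p(logits, p_base=0.1, temperature=1.0, rng=None):
    """Min-p truncation sampling (sketch): the cutoff scales with the
    probability of the most likely token, so a peaked distribution keeps few
    tokens (coherence) and a flat one keeps many (diversity)."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    threshold = p_base * probs.max()          # dynamic, top-token-relative cutoff
    kept = np.where(probs >= threshold, probs, 0.0)
    kept /= kept.sum()
    return int(rng.choice(len(kept), p=kept))

# Even at a high temperature, the filter adapts to how confident the model is.
token_id = sample_min_p(np.random.default_rng(0).normal(size=50), temperature=1.5)
```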
2406.01436 | Knowledge editing is a rising technique for efficiently updating factual knowledge in Large Language Models (LLMs) with minimal alteration of parameters. However, recent studies have identified concerning side effects, such as knowledge distortion and the deterioration of general abilities, that have emerged after editing. This survey presents a comprehensive study of these side effects, providing a unified view of the challenges associated with knowledge editing in LLMs. We discuss related works and summarize potential research directions to overcome these limitations. Our work highlights the limitations of current knowledge editing methods, emphasizing the need for a deeper understanding of the inner knowledge structures of LLMs and improved knowledge editing methods. To foster future research, we have publicly released complementary materials, such as a paper collection, at https://github.com/MiuLab/EditLLM-Survey. |
2401.17585 | Current approaches to knowledge editing struggle to effectively propagate updates to interconnected facts.
In this work, we delve into the barriers that hinder the appropriate propagation of updated knowledge within these models for accurate reasoning.
To support our analysis, we introduce a novel reasoning-based benchmark – ReCoE (Reasoning-based Counterfactual Editing dataset) – which covers six common reasoning schemes in the real world.
We conduct a thorough analysis of existing knowledge editing techniques, including input-augmentation, finetuning, and locate-and-edit.
We found that all model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
Our analysis of the chain-of-thought generation of edited models further uncovers key reasons behind the inadequacy of existing knowledge editing methods from a reasoning standpoint, involving aspects of fact-wise editing, fact recall ability, and coherence in generation. We will make our benchmark publicly available. |
2402.11078 | Fine-tuning is often dismissed as ineffective for model editing
due to its poor performance compared to more specialized methods.
However, fine-tuning is simple, agnostic to the architectural details of the model being edited,
and able to leverage ongoing advances in standard training methods (e.g., PEFT),
making it an appealing choice for a model editor.
In this work, we show that pure fine-tuning can be a viable approach to model editing.
We propose a slight modification of naive fine-tuning with two key ingredients.
First, we optimize the conditional likelihood rather than the full likelihood.
Second, we augment the data with random paraphrases and facts to encourage generalization and locality.
Our experiments on ZsRE and CounterFact show that this simple modification
allows fine-tuning to often match or outperform specialized editors in the edit score. |
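The first ingredient, optimizing the conditional rather than the full likelihood, amounts to computing the loss only on the completion tokens and masking out the prompt. A minimal PyTorch sketch is shown below; it assumes a Hugging Face-style causal LM whose output exposes `.logits`, which is an assumption rather than something stated in the abstract.

```python
import torch
import torch.nn.functional as F

def conditional_lm_loss(model, prompt_ids, target_ids):
    """Cross-entropy on the target (edited-fact) tokens only; prompt positions
    are ignored via the -100 label convention. Shapes: prompt_ids (B, P),
    target_ids (B, L); the model returns logits of shape (B, P+L, V)."""
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    logits = model(input_ids).logits[:, :-1, :]       # predict token t+1 from t
    labels = input_ids[:, 1:].clone()
    labels[:, : prompt_ids.size(1) - 1] = -100        # mask the prompt positions
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), labels.reshape(-1), ignore_index=-100
    )

# Usage (hypothetical): loss = conditional_lm_loss(model, prompt_ids, new_fact_ids)
```

The second ingredient (augmenting with random paraphrases and unrelated facts) changes only what goes into the training batches, not this objective.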
1704.04368 | Neural sequence-to-sequence models have provided a viable new approach
for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves.
In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways.
First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
Second, we use coverage to keep track of what has been summarized, which discourages repetition.
We apply our model to the CNN/Daily Mail summarization
task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. |
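For readers new to the mechanism, the sketch below restates the two components in code form, following the formulation of See et al. (2017): the final distribution mixes generating from the vocabulary with copying via the attention weights, and the coverage penalty discourages re-attending (and thus re-copying or repeating) the same source positions. Shapes and names are illustrative.

```python
import torch

def pointer_generator_dist(p_gen, vocab_dist, attn_dist, src_ids, vocab_size):
    """Final P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on source
    positions holding w). Shapes: p_gen (B, 1), vocab_dist (B, V),
    attn_dist (B, S), src_ids (B, S) long tensor of source token ids."""
    copy_dist = torch.zeros(vocab_dist.size(0), vocab_size)
    copy_dist.scatter_add_(1, src_ids, attn_dist)   # route attention mass to ids
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

def coverage_loss(attn_dist, coverage):
    """Coverage penalty sum_i min(a_i, c_i), where the coverage vector c is the
    sum of attention distributions from previous decoder steps."""
    return torch.minimum(attn_dist, coverage).sum(dim=-1).mean()
```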
2407.18940 | Literature search questions, such as “where can I find research on the evaluation of consistency in generated summaries?” pose significant challenges for modern search engines and retrieval systems. These questions often require a deep understanding of research concepts and the ability to reason over entire articles. In this work, we introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and
(2) questions about recently published papers, manually written by their authors.
All LitSearch questions were manually examined or edited by experts to ensure high quality. We extensively benchmark state-of-the-art retrieval models and also evaluate two LLM-based reranking pipelines.
We find a significant performance gap between BM25 and state-of-the-art dense retrievers, with a 24.8% difference in absolute recall@5. The LLM-based reranking strategies further improve the best-performing dense retriever by 4.4%. Additionally, commercial search engines and research tools like Google Search perform poorly on LitSearch, lagging behind the best dense retriever by 32 points. Taken together, these results show that LitSearch is an informative new testbed for retrieval systems while catering to a real-world use case. Our dataset and code are available at https://github.com/princeton-nlp/LitSearch. |
2403.03853 | As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters. However, in this study, we discovered that many layers of LLMs exhibit high similarity, and some layers play a negligible role in network functionality. Based on this observation, we define a metric called Block Influence (BI) to gauge the significance of each layer in LLMs. We then propose a straightforward pruning approach: layer removal, in which we directly delete the redundant layers in LLMs based on their BI scores. Experiments demonstrate that our method, which we call ShortGPT, significantly outperforms previous state-of-the-art (SOTA) methods in model pruning. Moreover, ShortGPT is orthogonal to quantization-like methods, enabling further reduction in parameters and computation. The ability to achieve better results through simple layer removal, as opposed to more complex pruning techniques, suggests a high degree of redundancy in the model architecture. |
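The abstract does not define Block Influence; the snippet below is one natural instantiation consistent with the stated observation of high similarity between a layer's input and output hidden states, namely one minus their average cosine similarity. Treat it as an assumption for illustration, not the paper's exact metric.

```python
import torch
import torch.nn.functional as F

def block_influence(h_in, h_out):
    """Hypothetical layer-importance score: 1 - mean cosine similarity between a
    layer's input and output hidden states (shape: tokens x hidden). A score
    near zero means the layer barely transforms its input."""
    return 1.0 - F.cosine_similarity(h_in, h_out, dim=-1).mean().item()

# Layers with the lowest scores would be the first candidates for removal.
```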
2406.16797 | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights, causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time.
To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks, thus avoiding catastrophic forgetting. By extracting and fine-tuning over lottery tickets (or sparse task vectors), LoTA also enables model merging over highly dissimilar tasks. Our code is made publicly available. A preliminary version of this work appeared previously. |
2407.21417 | Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both.
Here, we provide concrete evidence of a trade-off between instruction following (i.e., following open-ended instructions) and faithfulness (i.e., grounding responses in given context) when training LMs with these objectives. For instance, fine-tuning LLaMA-7B on instruction following datasets renders it less faithful. Conversely, instruction-tuned Vicuna-7B shows degraded performance at following instructions when further optimized on tasks that require contextual grounding. One common remedy is multi-task learning (MTL) with data mixing, yet it remains far from achieving a synergistic outcome. We propose a simple yet effective method that relies on Rejection Sampling for Continued Self-instruction Tuning (ReSet), which significantly outperforms vanilla MTL. Surprisingly, we find that less is more, as training ReSet with high-quality, yet substantially smaller data (three-fold less) yields superior results. Our findings offer a better understanding of objective discrepancies in alignment training of LMs. |
2408.08274 | The Mixture of Experts (MoE) framework has become a popular architecture for large language models due to its superior performance over dense models. However, training MoEs from scratch in a large-scale regime is prohibitively expensive. Existing methods mitigate this by pre-training multiple dense expert models independently and using them to initialize an MoE. This is done by using experts' feed-forward network (FFN) to initialize the MoE's experts while merging other parameters. However, this method limits the reuse of dense model parameters to only the FFN layers, thereby constraining the advantages when "upcycling" these models into MoEs. We propose BAM (Branch-Attend-Mix), a simple yet effective method that addresses this shortcoming. BAM makes full use of specialized dense models by not only using their FFN to initialize the MoE layers but also leveraging the experts' attention parameters fully by initializing them into a soft variant of Mixture of Attention (MoA) layers. We explore two methods for upcycling attention parameters: 1) initializing separate attention experts from dense models, including all attention parameters, for the best model performance; and 2) sharing key and value parameters across all experts to facilitate better inference efficiency. To further improve efficiency, we adapt a parallel attention transformer architecture to MoEs, which allows the attention experts and FFN experts to be computed concurrently. Our experiments on seed models ranging from 590 million to 2 billion parameters demonstrate that BAM surpasses baselines in both perplexity and downstream task performance, within the same computational and data constraints. |
2408.07852 | While many capabilities of *language models* (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition. We thus focus on studying only those hallucinations where a correct answer appears verbatim in the training set. To fully control the training data content, we construct a *knowledge graph* (KG)-based dataset, and use it to train a set of increasingly large LMs. We find that for a fixed dataset, larger and longer-trained LMs hallucinate less. However, hallucinating on ≤ 5% of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this costliness, we study how hallucination detectors depend on scale. While we see that detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations. |
2202.05262v5 | We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model’s factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task. We also evaluate ROME on a new dataset of difficult counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available
at https://rome.baulab.info/. |
2307.12950v3 | We propose Reinforcement Learning from Contrast Distillation (RLCD), a method for aligning language models to follow natural language principles without using human feedback.
RLCD trains a preference model using simulated preference pairs that contain both a high-quality and a low-quality example, generated using contrasting positive and negative prompts. The preference model is then used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF and context distillation baselines across three diverse alignment tasks—harmlessness, helpfulness, and story outline generation—and on both 7B and 30B model scales for preference data simulation. |
2402.17834v1 | We introduce StableLM 2 1.6B, the first in a new generation of our language model series.
In this technical report, we present in detail the data and training procedure leading to the base and instruction-tuned versions of StableLM 2 1.6B.
The weights for both models are available via Hugging Face for anyone to download and use: https://huggingface.co/stabilityai/stablelm-2-1_6b and https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b.
The report contains thorough evaluations of these models, including zero- and few-shot benchmarks, multilingual benchmarks, and the MT benchmark focusing on multi-turn dialogues.
At the time of publishing this report, StableLM 2 1.6B was the state-of-the-art open model under 2B parameters by a significant margin.
Given its appealing small size, we also provide throughput measurements on a number of edge devices.
In addition, we open source several quantized checkpoints and provide their performance metrics compared to the original model. |
2307.07924v5 | Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase.
At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks.
The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev. |
2408.07089 | Recent advancements in Chain-of-Thoughts (CoT) and Program-of-Thoughts (PoT) methods have greatly enhanced language models' mathematical reasoning capabilities, facilitating their integration into instruction tuning datasets with LLMs. However, existing methods for large-scale dataset creation require substantial seed data and high computational costs for data synthesis, posing significant challenges for scalability. We introduce **InfinityMath**, a scalable instruction tuning dataset for programmatic mathematical reasoning. The construction pipeline emphasizes decoupling numbers from mathematical problems to synthesize number-independent programs, enabling efficient and flexible scaling while minimizing dependency on specific numerical values. Fine-tuning experiments with open-source language and code models, such as Llama2 and CodeLlama, demonstrate the practical benefits of InfinityMath. These fine-tuned models showed significant relative improvements on both in-domain and out-of-domain benchmarks, ranging from 184.7% to 514.3% on average. Additionally, these models exhibited high robustness on the *GSM8K+* and *MATH+* benchmarks, which are enhanced versions of the test sets with simple number variations. InfinityMath ensures that models are more versatile and effective across a broader range of mathematical problems. The data is available at https://huggingface.co/datasets/flagopen/InfinityMATH. |
2404.08819v2 | State-space models (SSMs) have emerged as a potential alternative architecture for building large language models (LLMs) compared to the previously ubiquitous transformer architecture. One theoretical weakness of transformers is that they cannot express certain kinds of sequential computation and state tracking, which SSMs are explicitly designed to address via their close architectural similarity to recurrent neural networks (RNNs). But do SSMs truly have an advantage (over transformers) in expressive power for state tracking? Surprisingly, the answer is no. Our analysis reveals that the expressive power of SSMs is limited very similarly to transformers: SSMs cannot express computation outside the complexity class TC⁰. In particular, this means they cannot solve simple state-tracking problems like permutation composition. It follows that SSMs are provably unable to accurately track chess moves with certain notation, evaluate code, or track entities in a long narrative. To supplement our formal analysis, we report experiments showing that Mamba-style SSMs indeed struggle with state tracking. Thus, despite its recurrent formulation, the “state” in an SSM is an illusion: SSMs have similar expressiveness limitations to non-recurrent models like transformers, which may fundamentally limit their ability to solve real-world state-tracking problems. |
2404.18424v2 | The current use of large language models (LLMs) for zero-shot document ranking follows one of two ways: 1) prompt-based re-ranking methods, which require no further training but are feasible for only re-ranking a handful of candidate documents due to the associated computational costs; and 2) unsupervised contrastive trained dense retrieval methods, which can retrieve relevant documents from the entire corpus but require a large amount of paired text data for contrastive training.
In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus. Our method only requires prompts to guide an LLM to generate query and document representations for effective document retrieval. Specifically, we prompt the LLMs to represent a given text using a single word, and then use the last token’s hidden states and the corresponding logits associated with the prediction of the next token to construct a hybrid document retrieval system. The retrieval system harnesses both the dense text embedding and the sparse bag-of-words representations given by the LLM. Our experimental evaluation on the BEIR zero-shot document retrieval datasets illustrates that this simple prompt-based LLM retrieval method can achieve a similar or higher retrieval effectiveness than state-of-the-art LLM embedding methods that are trained with large amounts of unsupervised data, especially when using a larger LLM. Code for fully reproducing the results is available at https://github.com/ielab/PromptReps. |
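The recipe in the abstract (prompt the LLM to describe a text "with one word", then take the last token's hidden state as a dense embedding and its next-token logits as a sparse vocabulary-sized vector) can be sketched as below. The prompt wording and the SPLADE-style log1p(relu(.)) sparsification are assumptions for illustration and may differ from the paper's exact choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = 'Passage: "{text}"\nRepresent this passage with one word: "'  # hypothetical wording

def prompt_reps(model, tok, text):
    """Hybrid representations in the spirit of PromptReps: a dense embedding
    from the last token's hidden state and a sparse bag-of-words-style vector
    from its next-token logits."""
    inputs = tok(PROMPT.format(text=text), return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    dense = out.hidden_states[-1][0, -1]                  # (d_model,)
    sparse = torch.log1p(torch.relu(out.logits[0, -1]))   # (vocab_size,)
    return dense, sparse

# Example with a small stand-in model (any Hugging Face causal LM works):
# tok = AutoTokenizer.from_pretrained("gpt2")
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# dense, sparse = prompt_reps(model, tok, "Evaluating consistency in summaries.")
```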
2404.19553v1 | We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning (the model is denoted Llama-3-8B-Instruct-80K-QLoRA given its max context length during fine-tuning; users could apply the model to even longer contexts via extrapolation). The entire training cycle is super efficient, taking 8 hours on one 8xA800 (80G) GPU machine. The resulting model exhibits superior performance across a broad range of evaluation tasks, such as NIHS, topic retrieval, and long-context language understanding; meanwhile, it also well preserves the original capability over short contexts. The dramatic context extension is mainly attributed to merely 3.5K synthetic training samples generated by GPT-4, which indicates LLMs’ inherent (yet largely underestimated) potential to extend their original context length. In fact, the context length could be extended far beyond 80K with more computation resources. Therefore, the team will publicly release the entire resources (including data, model, data generation pipeline, and training code) to facilitate future research from the community: https://github.com/FlagOpen/FlagEmbedding. |
2402.04553v1 | We present a novel approach to accelerate stochastic gradient descent (SGD) by utilizing curvature information obtained from Hessian-vector products or finite differences of parameters and gradients, similar to the BFGS algorithm. Our approach involves two preconditioners: a matrix-free preconditioner and a low-rank approximation preconditioner. We update both preconditioners online using a criterion that is robust to stochastic gradient noise and does not require line search or damping. To preserve the corresponding symmetry or invariance, our preconditioners are constrained to certain connected Lie groups. The Lie group’s equivariance property simplifies the preconditioner fitting process, while its invariance property eliminates the need for damping, which is commonly required in second-order optimizers. As a result, the learning rate for parameter updating and the step size for preconditioner fitting are naturally normalized, and their default values work well in most scenarios. Our proposed approach offers a promising direction for improving the convergence of SGD with low computational overhead. We demonstrate that Preconditioned SGD (PSGD) outperforms SoTA on Vision, NLP, and RL tasks across multiple modern deep-learning architectures. We have provided code for reproducing the toy and large-scale experiments in this paper. |
2405.07987 | We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned.
Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way.
We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato’s concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis. |
2408.03943 | What do we want from machine intelligence? We envision machines that are not just tools for thought, but *partners* in thought: reasonable, insightful, knowledgeable, reliable, and trustworthy systems that think *with* us. Current artificial intelligence (AI) systems satisfy some of these criteria, some of the time. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called "thought partners," systems built to meet our expectations and complement our limitations. We lay out several modes of collaborative thought in which humans and AI thought partners can engage and propose desiderata for human-compatible thought partnerships. Drawing on motifs from computational cognitive science, we motivate an alternative scaling path for the design of thought partners and ecosystems around their use through a Bayesian lens, whereby the partners we construct actively build and reason over models of the human and world. |
2406.15334 | The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model’s context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV) — compact implicit representations of in-context examples compressed in the model’s attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference. |
2402.11782 | Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as “is aspartame linked to cancer”.
To resolve these ambiguous queries, one must search through a large range of websites and consider: which, if any, of this evidence do I find convincing? In this work, we study how LLMs answer this question.
In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important, such as whether a text contains scientific references or is written with a neutral tone.
Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation), and possibly even a shift in how LLMs are trained to better align with human judgements. |
2402.14207 | We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing.
We propose STORM, a writing system for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking.
STORM models the pre-writing stage by (1) discovering diverse perspectives in researching the given topic,
(2) simulating conversations where writers carrying different perspectives pose questions to a topic expert grounded on trusted Internet sources, (3) curating the collected information to create an outline. For evaluation, we curate FreshWiki, a dataset of recent high-quality Wikipedia articles, and formulate outline assessments to evaluate the pre-writing stage.
We further gather feedback from experienced Wikipedia editors. Compared to articles generated by an outline-driven retrieval-augmented baseline, more of STORM’s articles are deemed to be organized (by a 25% absolute increase) and broad in coverage (by 10%) .
The expert feedback also helps identify new challenges for generating grounded long articles, such as source bias transfer and over-association of unrelated facts. |
2408.04682 | Recent advancements in large language models (LLMs) have sparked growing research interest in tool-assisted LLMs solving real-world challenges, which calls for comprehensive evaluation of tool-use capabilities. While previous works focused on either evaluating over stateless web services (RESTful APIs) based on a single-turn user prompt, or on off-policy dialog trajectories, ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation, and a dynamic evaluation strategy for intermediate and final milestones over an arbitrary trajectory. We show that open-source and proprietary models have a significant performance gap, and complex tasks like State Dependency, Canonicalization, and Insufficient Information defined in ToolSandbox are challenging even for the most capable SOTA LLMs, providing brand-new insights into tool-use LLM capabilities. |
2105.09938 | While programming is one of the most broadly applicable skills in modern society, it is unclear how well state-of-the-art machine learning models can write code. Despite its importance, there has been surprisingly little work on evaluating code generation, and it can be difficult to assess code generation performance in an accurate and rigorous manner. To meet this challenge, we introduce APPS, a benchmark for code generation. Unlike prior work in more restricted settings, our benchmark measures the ability of models to take an arbitrary natural language specification and generate satisfactory Python code. Similar to how companies assess candidate software developers, we evaluate models by checking their generated code on test cases. Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges. We fine-tune large language models on both GitHub and our training set, and we find that the prevalence of syntax errors is decreasing exponentially as models improve. Recent models such as GPT-Neo can pass a nontrivial fraction of the test cases of introductory problems, so we find that machine learning models are now beginning to learn how to code. As the social significance of automatic code generation increases over the coming years, our benchmark can provide an objective measure for tracking advancements. |
2310.17813 | The push to train ever larger neural networks has motivated the study of initialization and training at large network width.
A key challenge is to scale training so that a network’s internal representations evolve nontrivially at all widths, a process known as feature learning.
Here, we show that feature learning is achieved by scaling the spectral norm of weight matrices and their updates like √(fan-out/fan-in), in contrast to widely used but heuristic scalings based on Frobenius norm and entry size.
Our spectral scaling analysis also leads to an elementary derivation of maximal update parametrization.
All in all, we aim to provide the reader with a solid conceptual understanding of feature learning in neural networks. |
2408.04614 | We propose a new method, instruction back-and-forth translation, to construct high-quality synthetic data grounded in world knowledge for aligning large language models (LLMs). Given documents from a web corpus, we generate and curate synthetic instructions using the backtranslation approach proposed by Li et al. (2023a), and rewrite the responses to improve their quality further based on the initial documents. Fine-tuning with the resulting (backtranslated instruction, rewritten response) pairs yields higher win rates on AlpacaEval than using other common instruction datasets such as Humpback, ShareGPT, Evol-Instruct, Open Orca, Alpaca-GPT4 and Self-instruct. We also demonstrate that rewriting the responses with an LLM outperforms direct distillation, and the two generated text distributions exhibit significant distinction in embedding space. Further analysis shows that our backtranslated instructions are of higher quality than other sources of synthetic instructions, while our responses are more diverse and complex than those obtained from distillation. Overall, we find that instruction back-and-forth translation combines the best of both worlds—making use of the information diversity and quantity found on the web, while ensuring the quality of the responses, which is necessary for effective alignment. |
2407.07726v1 | PaliGemma is an open Vision-Language Model (VLM) that is based on the SigLIP-So400m vision encoder and the Gemma-2B language model.
It is trained to be a versatile and broadly knowledgeable base model that is effective to transfer. It achieves strong performance on a wide variety of open-world tasks.
We evaluate PaliGemma on almost 40 diverse tasks including standard VLM benchmarks, but also more specialized tasks such as remote-sensing and segmentation. |
2407.21118 | KV-Cache compression methods generally sample a KV-Cache of effectual tokens or quantize it into lower bits.
However, these methods cannot exploit the redundancy of the hidden dimension of KV tensors.
This paper investigates a unique hidden dimension approach called Palu, a novel KV-Cache compression framework that utilizes low-rank projection. Palu decomposes the linear layers into low-rank matrices, caches the smaller intermediate states, and reconstructs the full keys and values on the fly. To improve accuracy, compression rate, and efficiency, Palu further encompasses (1) a medium-grained low-rank decomposition scheme, (2) an efficient rank search algorithm, (3) a low-rank-aware quantization algorithm, and (4) matrix fusion with optimized GPU kernels.
Our extensive experiments with popular LLMs show that Palu can compress the KV-Cache by more than 91.25% while maintaining significantly better accuracy (up to 1.19 lower perplexity) than state-of-the-art KV-Cache quantization methods at a similar or even higher memory usage. When compressing the KV-Cache by 50%, Palu delivers up to a 1.61× end-to-end speedup for the attention module. Our code is publicly available at: https://github.com/shadowpa0327/Palu. |
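Only the core decomposition idea from the abstract is sketched here: factor a key/value projection into two low-rank matrices, cache the small intermediate, and reconstruct the full key/value on the fly. Palu's medium-grained grouping, rank search, quantization, and fused kernels are not reproduced, and the module and argument names are invented.

```python
import torch
import torch.nn as nn

class LowRankKV(nn.Module):
    """A d_model x d_model key (or value) projection factored as B(A(x)) with
    rank r << d_model; only A(x) needs to live in the KV cache."""
    def __init__(self, d_model, rank):
        super().__init__()
        self.A = nn.Linear(d_model, rank, bias=False)   # produces the cached latent
        self.B = nn.Linear(rank, d_model, bias=False)   # reconstructs full K or V

    def compress(self, h):
        return self.A(h)          # shape (..., rank): this is what gets cached

    def reconstruct(self, latent):
        return self.B(latent)     # full-size key/value rebuilt at attention time

kv = LowRankKV(d_model=4096, rank=512)
h = torch.randn(1, 128, 4096)
latent = kv.compress(h)           # 8x smaller per-token cache entry
k = kv.reconstruct(latent)        # used inside attention as usual
```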
2406.04604 | When using language models (LMs) to solve complex problems, humans might struggle to understand the LM-generated solutions and repair the flawed ones.
To assist humans in repairing them, we propose to automatically decompose complex solutions into multiple simpler pieces that correspond to specific subtasks. We introduce a novel objective for learning task decomposition, termed assistive value (AssistV), which measures the feasibility and speed for humans to repair the decomposed solution. We collect a dataset of human repair experiences on different decomposed solutions. Utilizing the collected data as in-context examples, we then learn to critique, refine, and rank decomposed solutions to improve AssistV. We validate our method under competitive programming problems: under 177 hours of human study, our method enables non-experts to solve 33.3% more problems, speeds them up by 3.3x, and empowers them to match unassisted experts. |
2405.16528 | Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 7B parameters on a consumer-grade 24GB GPU. We also demonstrate the feasibility of training a 13B parameter model using per-layer gradient updates on the same hardware. https://github.com/sebulo/LoQT |
2408.02442 | Structured generation, the process of producing content in standardized formats like JSON and XML, is widely utilized in real-world applications to extract key output information from large language models (LLMs). This study investigates whether such constraints on generation space impact LLMs' abilities, including reasoning and domain knowledge comprehension. Specifically, we evaluate LLMs' performance when restricted to adhere to structured formats versus generating free-form responses across various common tasks. Surprisingly, we observe a significant decline in LLMs' reasoning abilities under format restrictions. Furthermore, we find that stricter format constraints generally lead to greater performance degradation in reasoning tasks. |
2407.09941 | A wide array of sequence models are built on a framework modeled after Transformers, comprising alternating sequence mixer and channel mixer layers. This paper studies a unifying *matrix mixer* view of sequence mixers that can be conceptualized as a linear map on the input sequence. This framework encompasses a broad range of well-known sequence models, including the self-attention of Transformers as well as recent strong alternatives such as structured state space models (SSMs), and allows understanding downstream characteristics such as efficiency and expressivity through properties of their structured matrix class. We identify a key axis of matrix parameterizations termed *sequence alignment*, which increases the flexibility and performance of matrix mixers, providing insights into the strong performance of Transformers and recent SSMs such as Mamba. Furthermore, the matrix mixer framework offers a systematic approach to developing sequence mixers with desired properties, allowing us to develop several new sub-quadratic sequence models. In particular, we propose a natural bidirectional extension of the Mamba model (**Hydra**), parameterized as a *quasiseparable matrix mixer*, which demonstrates superior performance over other sequence models including Transformers on non-causal tasks. As a drop-in replacement for attention layers, Hydra outperforms BERT by 0.8 points on the GLUE benchmark and ViT by 2% Top-1 accuracy on ImageNet. |
2004.11714 | Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation.
The dominant parametric approach is based on locally normalized models which predict one word at a time. While these
work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process.
In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence
level. In order to make training tractable, we first work in the residual of a pretrained locally normalized
language model and second we train using noise contrastive estimation. Furthermore, since the EBM works at the
sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa.
Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity
compared to locally normalized baselines. Moreover, generation via importance sampling is very efficient and of higher quality
than the baseline models according to human evaluation. |
2407.21075 | We present foundation language models developed to power Apple Intelligence features, including a 3 billion parameter model designed to run efficiently on devices and a large server-based language model designed for Private Cloud Compute.
These models are designed to perform a wide range of tasks efficiently, accurately, and responsibly.
This report describes the model architecture, the data used to train the model, the training process, how the models are optimized for inference, and the evaluation results. We highlight our focus on Responsible AI and how the principles are applied throughout the model development. |
2408.02666 | Model-based evaluation is at the heart of successful model development - as a reward model for training, and as a replacement for human evaluation. To train such evaluators, the standard approach is to collect a large amount of human preference judgments over model responses, which is costly and the data becomes stale as models improve. In this work, we present an approach that aims to improve evaluators *without human annotations*, using synthetic training data only. Starting from unlabeled instructions, our iterative self-improvement scheme generates contrasting model outputs and trains an LLM-as-a-Judge to produce reasoning traces and final judgments, repeating this training at each new iteration using the improved predictions. Without any labeled preference data, our Self-Taught Evaluator can improve a strong LLM (Llama3-70B-Instruct) from 75.4 to 88.3 (88.7 with majority vote) on RewardBench. This outperforms commonly used LLM judges such as GPT-4 and matches the performance of the top-performing reward models trained with labeled examples. |
2407.20224 | Knowledge editing techniques have been increasingly adopted to efficiently correct the false or outdated knowledge in Large Language Models (LLMs), due to the high cost of retraining from scratch. Meanwhile, one critical but under-explored question is: can knowledge editing be used to inject harm into LLMs? In this paper, we propose to reformulate knowledge editing as a new type of safety threat for LLMs, namely Editing Attack, and conduct a systematic investigation with a newly constructed dataset EditAttack. Specifically, we focus on two typical safety risks of Editing Attack, including Misinformation Injection and Bias Injection. For the risk of misinformation injection, we first categorize it into commonsense misinformation injection and long-tail misinformation injection. Then, we
find that editing attacks can inject both types of misinformation into LLMs, and the effectiveness is particularly high for commonsense misinformation injection. For the risk of bias injection, we discover that not only can biased sentences be injected into LLMs with high effectiveness, but also one single biased sentence injection can cause a bias increase in general outputs of LLMs, which are even highly irrelevant to the injected sentence, indicating a catastrophic impact on the overall fairness of LLMs. Then, we further illustrate the high stealthiness of editing attacks, measured by their impact on the
general knowledge and reasoning capacities of LLMs, and
show the hardness of defending against editing attacks with empirical evidence. Our discoveries demonstrate the emerging misuse risks of knowledge editing techniques for compromising the safety alignment of LLMs. Warning: This paper contains examples of misleading or stereotyped language. |
2407.00121 | Large language models (LLMs) have recently shown tremendous promise in serving as the backbone to agentic systems, as demonstrated by their performance in multi-faceted, challenging benchmarks like SWE-Bench and Agent-Bench.
However, to realize the true potential of LLMs as autonomous agents, they must learn to identify, call, and interact with external tools and application programming interfaces (APIs) to complete complex tasks. These tasks together are termed function calling.
Endowing LLMs with function calling abilities leads to a myriad of advantages, such as access to current and domain-specific information in databases and knowledge sources, and the ability to outsource tasks that can be reliably performed by tools, e.g., a Python interpreter or calculator.
While there has been significant progress in function calling with LLMs, there is still a dearth of open models that perform on par with proprietary LLMs like GPT, Claude, and Gemini.
Therefore, in this work, we introduce the Granite-20B-FunctionCalling model under an Apache 2.0 license (the model will be available soon at https://huggingface.co/ibm-granite/). The model is trained using a multi-task training approach on seven fundamental tasks encompassed in function calling, those being Nested Function Calling, Function Chaining, Parallel Functions, Function Name Detection, Parameter-Value Pair Detection, Next-Best Function, and Response Generation. We present a comprehensive evaluation on multiple out-of-domain datasets comparing Granite-20B-FunctionCalling to more than 15 of the best proprietary and open models. Granite-20B-FunctionCalling provides the best performance among all open models on the Berkeley Function Calling Leaderboard and is fourth overall.
As a result of the diverse tasks and datasets used for training our model, we show that Granite-20B-FunctionCalling has better generalizability on multiple tasks across seven different evaluation datasets. |
2407.20243 | Embeddings from Large Language Models (LLMs) have emerged as critical components in various applications, particularly for information retrieval.
While high-dimensional embeddings generally demonstrate superior performance as they contain more salient information, their practical application is frequently hindered by elevated computational latency and the associated higher cost.
To address these challenges, we propose Matryoshka-Adaptor, a novel tuning framework designed for the customization of LLM embeddings.
Matryoshka-Adaptor facilitates substantial dimensionality reduction while maintaining comparable performance levels, thereby achieving a significant enhancement in computational efficiency and cost-effectiveness.
Our framework directly modifies the embeddings from pre-trained LLMs and is designed to be seamlessly integrated with any LLM architecture, including those accessible exclusively through black-box APIs.
Also, it exhibits efficacy in both unsupervised and supervised learning settings.
A rigorous evaluation conducted across a diverse corpus of English, multilingual, and multimodal datasets consistently reveals substantial gains with Matryoshka-Adaptor.
Notably, with Google and OpenAI Embedding APIs, Matryoshka-Adaptor achieves a reduction in dimensionality ranging from two- to twelve-fold without compromising performance across multiple BEIR datasets. |
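A rough illustration of the idea in the Matryoshka-Adaptor abstract above: a small adaptor is tuned on top of frozen (possibly API-only) embeddings so that truncated prefixes of the adapted embedding preserve the similarity structure of the full embedding. The architecture, loss, and prefix schedule below are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of a Matryoshka-style adaptor: an MLP is tuned on top of
# frozen (e.g., API-provided) embeddings so that truncated prefixes of its
# output preserve the pairwise-similarity structure of the full embeddings.
import torch
import torch.nn.functional as F

def pairwise_cosine(x):
    x = F.normalize(x, dim=-1)
    return x @ x.T

class Adaptor(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim)
        )
    def forward(self, e):
        return e + self.net(e)  # residual keeps adapted embeddings close to the originals

def matryoshka_loss(frozen_emb, adaptor, prefix_dims=(64, 128, 256, 512)):
    target = pairwise_cosine(frozen_emb)              # similarity structure of full embeddings
    adapted = adaptor(frozen_emb)
    loss = 0.0
    for d in prefix_dims:                             # each truncated prefix should mimic it
        loss = loss + F.mse_loss(pairwise_cosine(adapted[:, :d]), target)
    return loss / len(prefix_dims)

# toy usage: 32 cached 768-d embeddings from a black-box API
emb = torch.randn(32, 768)
adaptor = Adaptor(768)
opt = torch.optim.Adam(adaptor.parameters(), lr=1e-4)
opt.zero_grad()
matryoshka_loss(emb, adaptor).backward()
opt.step()
```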
2310.10505 | Alignment is crucial for training large language models. The predominant strategy is Reinforcement Learning from Human Feedback (RLHF), with Proximal Policy Optimization (PPO) as the de-facto algorithm. Yet, PPO is known to struggle with computational inefficiency, a challenge that this paper aims to address. We identify three important properties of RLHF tasks: fast simulation, deterministic transitions, and trajectory-level rewards, which are not leveraged in PPO. Based on these properties, we develop ReMax, a new algorithm tailored for RLHF. The design of ReMax builds on the celebrated algorithm REINFORCE but is enhanced with a new variance-reduction technique. ReMax offers threefold advantages over PPO: first, it is simple to implement with just 6 lines of code. It further eliminates more than 4 hyper-parameters in PPO, which are laborious to tune. Second, ReMax reduces memory usage by about 50%. To illustrate, PPO runs out of memory when fine-tuning a Llama2-7B model on A100-80GB GPUs, whereas ReMax can support the training. Even though memory-efficient techniques (e.g., ZeRO and offload) are employed for PPO to afford training, ReMax can utilize a larger batch size to increase throughput. Third, in terms of wall-clock time, PPO is about twice as slow as ReMax per iteration. Importantly, these improvements do not sacrifice task performance. We hypothesize that these advantages can be maintained in larger-scale models. (Our implementation of ReMax is available at https://github.com/liziniu/ReMax.) |
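On one reading of the ReMax abstract above, the variance-reduction trick amounts to REINFORCE with the reward of the greedy response used as a per-prompt baseline. The sketch below assumes hypothetical helpers (sample_response, greedy_response, reward_fn, logprob_of) standing in for the real generation and reward-model calls; it is not the authors' implementation.

```python
# Hedged sketch of a ReMax-style policy-gradient loss: REINFORCE with the
# reward of the greedy response as a per-prompt baseline.
def remax_loss(policy, prompt, sample_response, greedy_response, reward_fn, logprob_of):
    """One loss term for a single prompt; helpers are hypothetical placeholders."""
    y_sample = sample_response(policy, prompt)      # stochastic rollout from the current policy
    y_greedy = greedy_response(policy, prompt)      # greedy rollout, used only for the baseline
    advantage = float(reward_fn(prompt, y_sample) - reward_fn(prompt, y_greedy))
    logp = logprob_of(policy, prompt, y_sample)     # summed token log-probs (requires grad)
    return -advantage * logp                        # maximizing reward above the greedy baseline

# A training step would then be: loss = remax_loss(...); loss.backward(); optimizer.step()
```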
2406.11939v1 | The rapid evolution of language models has necessitated the development of more challenging benchmarks. Current static benchmarks often struggle to consistently distinguish between the capabilities of different models and fail to align with real-world user preferences. On the other hand, live crowd-sourced platforms like the Chatbot Arena collect a wide range of natural prompts and user feedback. However, these prompts vary in sophistication and the feedback cannot be applied offline to new models. In order to ensure that benchmarks keep up with the pace of LLM development, we address how one can evaluate benchmarks on their ability to confidently separate models and their alignment with human preference. Under these principles, we developed BenchBuilder, a living benchmark that filters high-quality prompts from live data sources to enable offline evaluation on fresh, challenging prompts. BenchBuilder identifies seven indicators of a high-quality prompt, such as the requirement for domain knowledge, and utilizes an LLM annotator to select a high-quality subset of prompts from various topic clusters. The LLM evaluation process employs an LLM judge to ensure a fully automated, high-quality, and constantly updating benchmark. We apply BenchBuilder on prompts from the Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts from a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with human preference rankings, all at a cost of only $25 and without human labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides a valuable tool for developers, enabling them to extract high-quality benchmarks from extensive data with minimal effort. |
2407.21787 | Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit the amount of compute to only one attempt per problem. Here, we explore inference compute as another axis for scaling by increasing the number of generated samples. Across multiple tasks and models, we observe that coverage – the fraction of problems solved by any attempt – scales with the number of samples over four orders of magnitude. In domains like coding and formal proofs, where all answers can be automatically verified, these increases in coverage directly translate into improved performance.
When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-V2-Coder-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-attempt state-of-the-art of 43% which uses more capable frontier models.
Moreover, using current API pricing, amplifying the cheaper DeepSeek model with five samples is more cost-effective and solves more issues than paying a premium for one sample from GPT-4o or Claude 3.5 Sonnet. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws.
Finally, we find that identifying correct samples out of many generations remains an important direction for future research in domains without automatic verifiers.
When solving math word problems from GSM8K and MATH, coverage with Llama-3 models grows to over 95% with 10,000 samples.
However, common methods to pick correct solutions from a sample collection, such as majority voting or reward models, plateau beyond several hundred samples and fail to fully scale with the sample budget. |
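Coverage curves of the kind described above are typically estimated with the standard unbiased pass@k estimator computed from n samples per problem, of which c are correct. The snippet below is a generic sketch of that estimator, not the paper's exact evaluation code.

```python
# Unbiased estimator of coverage (pass@k): the probability that at least one of
# k randomly chosen samples (out of n, with c correct) solves the problem is
# 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # fewer than k incorrect samples: some correct one is always included
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def coverage(per_problem_counts, k):
    """Average pass@k over problems; per_problem_counts is a list of (n, c) pairs."""
    return sum(pass_at_k(n, c, k) for n, c in per_problem_counts) / len(per_problem_counts)

# e.g., two problems, 100 samples each, with 3 and 0 correct samples respectively:
print(coverage([(100, 3), (100, 0)], k=10))
```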
2407.21770 | We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7× overall, with 2.6× for text and 5.2× for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3× overall FLOPs savings (3× for text, 2.8× for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2× overall (text: 3.4×, image: 5.3×),
although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa’s potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems. |
2010.13369 | Recently, Transformer-based language models have demonstrated remarkable performance across many NLP domains.
However, the unsupervised pre-training step of these models suffers from unbearable overall computational expenses.
Current methods for accelerating the pre-training either rely on massive parallelism with advanced hardware or are not applicable to language modeling.
In this work, we propose a method based on progressive layer dropping that speeds up the training of Transformer-based language models, not at the cost of excessive hardware resources but through changes to the model architecture and a training technique that boosts efficiency.
Extensive experiments on BERT show that the proposed method achieves a 24% time reduction on average per sample and allows the pre-training to be 2.5× faster than the baseline while reaching similar accuracy on downstream tasks. While being faster, our pre-trained models are equipped with strong knowledge transferability, achieving comparable and sometimes higher GLUE scores than the baseline when pre-trained with the same number of samples. |
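A hedged sketch of the progressive layer dropping idea described above: each transformer layer is stochastically skipped during training, with a drop rate that grows over the course of training and with depth. The specific schedule below is an illustrative approximation rather than the paper's exact formula.

```python
# Illustrative progressive layer dropping: skip each residual transformer layer
# with a probability that ramps up over training (progress) and is larger for
# deeper layers (depth). The keep-probability schedule here is an assumption.
import random

def keep_prob(layer_idx, num_layers, step, total_steps, final_drop=0.5):
    progress = min(step / max(total_steps, 1), 1.0)   # 0 -> 1 over training
    depth = (layer_idx + 1) / num_layers              # deeper layers are dropped more often
    return 1.0 - final_drop * progress * depth

def forward_with_layer_drop(x, layers, step, total_steps, training=True):
    for i, layer in enumerate(layers):
        if training and random.random() > keep_prob(i, len(layers), step, total_steps):
            continue                                   # skip this layer; the residual path is the identity
        x = layer(x)
    return x
```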
2304.07193 | The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
These models could greatly simplify the use of images in any system by producing general-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources.
We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
In terms of models, we train a ViT model with 1B parameters and distill it into a series of smaller models that surpass the best available general-purpose features, OpenCLIP, on most of the benchmarks at image and pixel levels. |
2407.21018 | Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications by leveraging increased model sizes and sequence lengths.
However, the associated rise in computational and memory costs poses significant challenges, particularly in managing long sequences due to the quadratic complexity of the transformer attention mechanism.
This paper focuses on the long-context scenario, addressing the inefficiencies in KV cache memory consumption during inference.
Unlike existing approaches that optimize the memory based on the sequence lengths, we uncover that the channel dimension of the KV cache exhibits significant redundancy, characterized by unbalanced magnitude distribution and low-rank structure in attention weights.
Based on these observations, we propose ThinK, a novel query-dependent KV cache pruning method designed to minimize attention weight loss while selectively pruning the least significant channels. Our approach not only maintains or enhances model accuracy but also achieves a reduction in memory costs by over 20% compared with vanilla KV cache eviction methods. Extensive evaluations on the LLaMA3 and Mistral models across various long-sequence datasets confirm the efficacy of ThinK, setting a new precedent for efficient LLM deployment without compromising performance. We also outline the potential of extending our method to value cache pruning, demonstrating ThinK’s versatility and broad applicability in reducing both memory and computational overheads. |
2310.11451 | Large Language Models (LLMs) inherently encode a wealth of knowledge within their parameters through pre-training on extensive corpora. While prior research has delved into operations on these parameters to manipulate the underlying implicit knowledge—encompassing detection, editing, and merging—there remains an ambiguous understanding regarding their transferability across models with varying scales. In this paper, we seek to empirically investigate knowledge transfer from larger to smaller models through a parametric perspective. To achieve this, we employ sensitivity-based techniques to extract and align knowledge-specific parameters between different LLMs. Moreover, the LoRA module is used as the intermediary mechanism for injecting the extracted knowledge into smaller models. Evaluations across four benchmarks validate the efficacy of our proposed method. Our findings highlight the critical factors contributing to the process of parametric knowledge transfer, underscoring the transferability of model parameters across LLMs of different scales. We release code and data at https://github.com/maszhongming/ParaKnowTransfer. |
2402.12219 | The quality of finetuning data is crucial for aligning large language models (LLMs) with human values.
Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations.
This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence.
This approach minimizes human annotation, hallucination, and the difficulty in scaling, remaining orthogonal to existing alignment techniques.
Experimentally, ReAlign significantly boosts the general alignment ability, math reasoning, factuality, and readability of the LLMs. Encouragingly, without introducing any additional data or advanced training techniques, and merely by reformatting the response, LLaMA-2-13B's mathematical reasoning ability on GSM8K can be improved from 46.77% to 56.63% in accuracy.
Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment ability measured by the Alpaca dataset.
This work highlights the need for further research into the science and mechanistic interpretability of LLMs. We have made the associated code and data publicly accessible to support future studies at https://github.com/GAIR-NLP/ReAlign. |
2207.02598 | Machine learning (ML) models are typically optimized for their accuracy on a given dataset.
However, this predictive criterion rarely captures all desirable properties of a model, in particular how well it matches a domain expert's understanding of a task.
Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy, even though they differ in other desirable properties such as out-of-distribution (OOD) performance.
Identifying these situations is critical for assessing the reliability of ML models. We formalize the concept of underspecification and propose a method to identify and partially address it.
We train multiple models with an independence constraint that forces them to implement different functions.
They discover predictive features that are otherwise ignored by standard empirical risk minimization (ERM) , which we then distill into a global model with superior OOD performance.
Importantly, we constrain the models to align with the data manifold to ensure that they discover meaningful features.
We demonstrate the method on multiple datasets in computer vision (collages, WILDS-Camelyon17, GQA) and discuss general implications of underspecification.
Most notably, in-domain performance cannot serve for OOD model selection without additional assumptions. |
2011.03395 | ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain. |
2405.14838 | When leveraging language models for reasoning tasks, generating explicit chain-of-thought (CoT) steps often proves essential for achieving high accuracy in final outputs. In this paper, we investigate if models can be taught to internalize these CoT steps. To this end, we propose a simple yet effective method for internalizing CoT steps: starting with a model trained for explicit CoT reasoning, we gradually remove the intermediate steps and finetune the model. This process allows the model to internalize the intermediate reasoning steps, thus simplifying the reasoning process while maintaining high performance. Our approach enables a GPT-2 Small model to solve 9-by-9 multiplication with up to 99% accuracy, whereas standard training cannot solve beyond 4-by-4 multiplication. Furthermore, our method proves effective on larger language models, such as Mistral 7B, achieving over 50% accuracy on GSM8K without producing any intermediate steps. |
1901.09335 | Large-batch SGD is important for scaling training of deep neural networks.
However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered.
We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling.
We analyze the effect of batch augmentation on gradient variance, and show that it empirically improves convergence for a wide variety of deep neural networks and datasets.
Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art.
Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently. |
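Batch augmentation as described above is straightforward to express in code: replicate each sample in the batch M times with independent random augmentations before the forward/backward pass. The snippet below is a minimal sketch with a toy augmentation standing in for real transforms such as random crops or flips.

```python
# Minimal sketch of batch augmentation: M augmented copies of every sample are
# placed in the same batch, so each SGD step sees more augmented views.
import torch

def batch_augment(batch, augment, m=4):
    """batch: tensor of shape (B, ...); augment: stochastic transform of such a tensor."""
    return torch.cat([augment(batch) for _ in range(m)], dim=0)   # shape (M*B, ...)

# usage sketch with additive noise standing in for real data augmentation
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_aug = batch_augment(x, lambda t: t + 0.01 * torch.randn_like(t), m=4)
y_aug = y.repeat(4)          # labels are tiled to match the concatenated augmented copies
```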
1902.05509 | MultiGrain is a network architecture producing compact vector representations that are suited both for image classification and particular object retrieval.
It builds on a standard classification trunk.
The top of the network produces an embedding containing coarse and fine-grained information, so that images can be recognized based on the object class, particular object, or if they are distorted copies.
Our joint training is simple: we minimize a cross-entropy loss for classification and a ranking loss
that determines if two images are identical up to data augmentation, with no need for additional labels.
A key component of MultiGrain is a pooling layer that takes advantage of high-resolution images with a network trained at a lower resolution. When fed to a linear classifier, the learned embeddings provide state-of-the-art classification accuracy.
For instance, we obtain 79.4% top-1 accuracy with a ResNet-50 learned on Imagenet, which is a +1.8% absolute improvement over the AutoAugment method.
When compared with the cosine similarity, the same embeddings perform on par with the state-of-the-art for image retrieval at moderate resolutions. |
2406.15972 | Continual learning aims to allow models to learn new tasks without forgetting what has been learned before. This work introduces Elastic Variational Continual Learning with Weight Consolidation (EVCL) , a novel hybrid model that integrates the variational posterior approximation mechanism of Variational Continual Learning (VCL) with the regularization-based parameter-protection strategy of Elastic Weight Consolidation (EWC) . By combining the strengths of both methods, EVCL effectively mitigates catastrophic forgetting and enables better capture of dependencies between model parameters and task-specific data. Evaluated on five discriminative tasks, EVCL consistently outperforms existing baselines in both domain-incremental and task-incremental learning scenarios for deep discriminative models. |
2403.14606 | Artificial intelligence has recently experienced remarkable advances, fueled by
large models, vast datasets, accelerated hardware, and, last but not least,
the transformative power of differentiable programming. This new programming
paradigm enables end-to-end differentiation of complex computer programs
(including those with control flows and data structures) , making gradient-based
optimization of program parameters possible. As an emerging paradigm, differentiable programming builds upon several areas
of computer science and applied mathematics, including automatic
differentiation, graphical models, optimization and statistics. This book
presents a comprehensive review of the fundamental concepts useful for
differentiable programming. We adopt two main perspectives, that of
optimization and that of probability, with clear analogies between the two. Differentiable programming is not merely the differentiation of
programs, but also the thoughtful design of programs intended for
differentiation. By making programs differentiable, we inherently introduce
probability distributions over their execution, providing a means to quantify
the uncertainty associated with program outputs. |
2206.07137 | Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select "hard" (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes "easy" points, but such points need not be trained on once learnt. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling. (Code: https://github.com/OATML/RHO-Loss) |
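A minimal sketch of RHO-LOSS-style selection, assuming the reducible loss is computed as the current training loss minus an irreducible loss from a small model trained on holdout data; the models, data loading, and the exact selection ratio are placeholders rather than the authors' code.

```python
# Hedged sketch of reducible-holdout-loss selection: within a large candidate
# batch, keep only the points whose training loss most exceeds their
# irreducible (holdout-model) loss, then train on that subset as usual.
import torch
import torch.nn.functional as F

def select_rho_loss(model, irreducible_model, x, y, keep_fraction=0.1):
    with torch.no_grad():
        train_loss = F.cross_entropy(model(x), y, reduction="none")
        irreducible = F.cross_entropy(irreducible_model(x), y, reduction="none")
        reducible = train_loss - irreducible      # learnable, worth learning, not yet learnt
    k = max(1, int(keep_fraction * x.shape[0]))
    idx = torch.topk(reducible, k).indices
    return x[idx], y[idx]                         # subset to use for the actual gradient step
```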
2404.01413 | The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when these models are trained on their own generated outputs? Recent investigations into model-data feedback loops proposed that such loops would lead to a phenomenon termed model collapse, under which performance progressively degrades with each model-data feedback iteration until fitted models become useless. However, those studies largely assumed that new data replace old data over time, where an arguably more realistic assumption is that data accumulate over time. In this paper, we ask: what effect does accumulating data have on model collapse?
We empirically study this question by pretraining sequences of language models on text corpora. We confirm that replacing the original real data by each generation’s synthetic data does indeed tend towards model collapse, then demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse; these results hold across a range of model sizes, architectures, and hyperparameters. We obtain similar results for deep generative models on other types of real data: diffusion models for molecule conformation generation and variational autoencoders for image generation. To understand why accumulating data can avoid model collapse, we use an analytically tractable framework introduced by prior work in which a sequence of linear models are fit to the previous models’ outputs. Previous work used this framework to show that if data are replaced, the test error increases with the number of model-fitting iterations; we extend this argument to prove that if data instead accumulate, the test error has a finite upper bound independent of the number of iterations, meaning model collapse no longer occurs.
Our work provides consistent empirical and theoretical evidence that data accumulation avoids model collapse. |
2407.17465 | The Maximal Update Parametrization (µP) aims to make the optimal hyperparameters (HPs) of a model independent of its size, allowing them to be swept using a cheap proxy model rather than the full-size target model.
We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision.
The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that activations, weights and gradients begin training with a scale of one.
This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. |
2202.12837 | Large language models (LMs) are able to in-context learn—perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance.
In this paper, we show that ground truth demonstrations are in fact not required—randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3.
Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of
(1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence.
Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone. |
2402.18819 | In-context learning (ICL) exhibits dual operating modes: task learning, i.e., acquiring a new skill from in-context samples, and task retrieval, i.e., locating and activating a relevant pretrained skill.
Recent theoretical work investigates various mathematical models to analyze ICL, but existing models explain only one operating mode at a time.
We introduce a probabilistic model, with which one can explain the dual operating modes of ICL simultaneously.
Focusing on in-context learning of linear functions, we extend existing models for pretraining data by introducing multiple task groups and task-dependent input distributions.
We then analyze the behavior of the optimally pretrained model under the squared loss, i.e., the MMSE estimator of the label given in-context examples.
Regarding pretraining task distribution as prior and in-context examples as the observation, we derive the closed-form expression of the task posterior distribution.
With the closed-form expression, we obtain a quantitative understanding of the two operating modes of ICL.
Furthermore, we shed light on an unexplained phenomenon observed in practice: under certain settings, the ICL risk initially increases and then decreases with more in-context examples.
Our model offers a plausible explanation for this “early ascent” phenomenon: a limited number of in-context samples may lead to the retrieval of an incorrect skill, thereby increasing the risk, which will eventually diminish as task learning takes effect with more in-context samples.
We also theoretically analyze ICL with biased labels, e.g., zero-shot ICL, where in-context examples are assigned random labels.
Lastly, we validate our findings and predictions via experiments involving Transformers and large language models.
The code for our project is available in the GitHub repository: https://github.com/UW-Madison-Lee-Lab/Dual_Operating_Modes_of_ICL. |
2407.14414 | Language models can be used to solve long-horizon planning problems in two distinct modes. In a fast 'System-1' mode, models directly generate plans without any explicit search or backtracking, and in a slow 'System-2' mode, they plan step-by-step by explicitly searching over possible actions. While System-2 planning is typically more effective, it is also more computationally expensive, often making it infeasible for long plans or large action spaces. Moreover, isolated System-1 or System-2 planning ignores the user's end goals and constraints (e.g., token budget), failing to provide ways for the user to control their behavior. To this end, we propose the System-1.x Planner, a controllable planning framework with language models that is capable of generating hybrid plans and balancing between the two planning modes based on the difficulty of the problem at hand. System-1.x consists of (i) a controller, (ii) a System-1 Planner, and (iii) a System-2 Planner. Based on a user-specified hybridization factor x governing the degree to which the system uses System-1 vs. System-2, the controller decomposes a planning problem into sub-goals, and classifies them as easy or hard to be solved by either System-1 or System-2, respectively. We fine-tune all three components on top of a single base LLM, requiring only search traces as supervision. Experiments with two diverse planning tasks – Maze Navigation and Blocksworld – show that our System-1.x Planner outperforms a System-1 Planner, a System-2 Planner trained to approximate A∗ search, and also a symbolic planner (A∗ search), given an exploration budget. We also demonstrate the following key properties of our planner: (1) controllability: by adjusting the hybridization factor x we can perform more (or less) search, improving performance, (2) flexibility: by building a neuro-symbolic variant composed of a neural System-1 planner and a symbolic System-2 planner, we can take advantage of existing symbolic methods, and (3) generalizability: by learning from different search algorithms (BFS, DFS, A∗), we show that our method is robust to the choice of search algorithm used for training. (Code available at: https://github.com/swarnaHub/System-1.x) |
2406.11794 | We introduce DataComp for Language Models (DCLM) , a testbed for controlled dataset experiments with the goal of improving language models.
As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations.
Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at
model scales ranging from 412M to 7B parameters.
As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set.
The resulting dataset, DCLM-baseline, enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens.
Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute.
Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6× less compute than Llama 3 8B.
Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
We release the DCLM benchmark, framework, models, and datasets at https://datacomp.ai/dclm. |
2403.13787 | Reward models (RMs) are at the crux of successful RLHF to align pretrained models to human preferences, yet there has been relatively little study that focuses on evaluation of those reward models.
Evaluating reward models presents an opportunity to understand the opaque technologies used for alignment of language models and which values are embedded in them.
To date, very few descriptors of capabilities, training methods, or open-source reward models exist.
In this paper, we present RewardBench, a benchmark dataset and code-base for evaluation, to enhance scientific understanding of reward models.
The RewardBench dataset is a collection of prompt-win-lose trios spanning chat, reasoning, and safety, to benchmark how reward models perform on challenging, structured and out-of-distribution queries.
We created specific comparison datasets for RMs that have subtle, but verifiable reasons (e.g. bugs, incorrect facts) why one answer should be preferred to another.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods, such as the direct MLE training of classifiers and the implicit reward modeling of Direct Preference Optimization (DPO), and on a spectrum of datasets.
We present many findings on propensity for refusals, reasoning limitations, and instruction following shortcomings of various reward models towards a better understanding of the RLHF process. |
1911.00172v2 | We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model.
The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data.
Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional training.
We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training.
Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge.
Together, these results strongly suggest that learning similarity between sequences of text
is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail. |
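The interpolation at the heart of kNN-LM is compact: mix the base LM's next-token distribution with a distribution built from the k nearest neighbors of the current context representation in a datastore of (context embedding, next token) pairs. The sketch below omits the real LM and any approximate nearest-neighbor index, and the distance-to-probability conversion shown is an illustrative choice.

```python
# Minimal sketch of kNN-LM interpolation over a toy in-memory datastore.
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=8, temperature=1.0):
    d2 = ((keys - query) ** 2).sum(axis=1)        # squared L2 distances to datastore keys
    nn = np.argsort(d2)[:k]                       # indices of the k nearest neighbors
    w = np.exp(-d2[nn] / temperature)             # closer neighbors receive more mass
    w /= w.sum()
    p = np.zeros(vocab_size)
    for token, weight in zip(values[nn], w):
        p[token] += weight                        # aggregate mass per next-token id
    return p

def knn_lm_probs(p_lm, query, keys, values, lam=0.25):
    p_knn = knn_distribution(query, keys, values, vocab_size=len(p_lm))
    return lam * p_knn + (1.0 - lam) * p_lm       # linear interpolation of the two distributions
```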
2407.10969 | We introduce Q-Sparse, a simple yet effective approach to training sparsely-activated large language models (LLMs). Q-Sparse enables full sparsity of activations in LLMs, which can bring significant efficiency gains in inference. This is achieved by applying top-K sparsification to the activations and the straight-through estimator to the training. We also introduce Block Q-Sparse for batch training and inference. The key results from this work are: (1) Q-Sparse can achieve results comparable to those of baseline LLMs while being much more efficient at inference time;
(2) We present an inference-optimal scaling law for sparsely-activated LLMs; (3) Q-Sparse is effective in different settings, including training-from-scratch, continue-training of off-the-shelf LLMs, and finetuning; (4) Q-Sparse works for both full-precision and 1-bit LLMs (e.g., BitNet b1.58). Particularly, the synergy of BitNet b1.58 and Q-Sparse (can be equipped with MoE) provides the cornerstone and a clear path to revolutionize the efficiency, including cost and energy consumption, of future LLMs. |
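A hedged sketch of the core operator suggested by the Q-Sparse abstract: top-K sparsification of activations in the forward pass, combined with a straight-through estimator so that gradients are computed as if no sparsification had been applied. Where this operator is placed in the network and how K is chosen follow the paper and are not shown here.

```python
# Top-K activation sparsification with a straight-through estimator (STE):
# the forward pass keeps only the K largest-magnitude entries per token, while
# the backward pass treats the operation as the identity.
import torch

def topk_sparsify_ste(x, k):
    """x: (..., hidden); keep the k largest-|x| entries along the last dim."""
    idx = x.abs().topk(k, dim=-1).indices
    mask = torch.zeros_like(x).scatter_(-1, idx, 1.0)
    x_sparse = x * mask
    return x + (x_sparse - x).detach()   # forward: x_sparse; backward: gradient of x (identity)

h = torch.randn(2, 16, requires_grad=True)
out = topk_sparsify_ste(h, k=4)
out.sum().backward()
print(h.grad.unique())                   # all ones: gradients passed straight through
```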
2407.12077 | We introduce GoldFinch, a hybrid Linear Attention/Transformer sequence model that uses a new technique to efficiently generate a highly compressed and reusable KV-Cache in linear time and space with respect to sequence length. GoldFinch stacks our new GOLD transformer on top of an enhanced version of the Finch (RWKV-6) architecture. We train up to 1.5B parameter class models of the Finch, Llama, and GoldFinch architectures, and find dramatically improved modeling performance relative to both Finch and Llama. Our cache size savings increase linearly with model layer count, ranging from 756-2550 times smaller than the traditional transformer cache for common sizes, enabling inference of extremely large context lengths even on limited hardware. Although autoregressive generation has O(n) time complexity per token because of attention, pre-fill computation of the entire initial cache state for a submitted context costs only O(1) time per token due to the use of a recurrent neural network (RNN) to generate this cache. We release our trained weights and training code under the Apache 2.0 license for community use. (Code at: https://github.com/recursal/GoldFinch-paper; model weights at: https://huggingface.co/recursal/GoldFinch-paper) |
2407.12267 | We present a new approach for generating 3D house wireframes with semantic enrichment using an autoregressive model. Unlike conventional generative models that independently process vertices, edges, and faces, our approach employs a unified wire-based representation for improved coherence in learning 3D wireframe structures. By re-ordering wire sequences based on semantic meanings, we facilitate seamless semantic integration during sequence generation. Our two-phase technique merges a graph-based autoencoder with a transformer-based decoder to learn latent geometric tokens and generate semantic-aware wireframes. Through iterative prediction and decoding during inference, our model produces detailed wireframes that can be easily segmented into distinct components, such as walls, roofs, and rooms, reflecting the semantic essence of the shape. Empirical results on a comprehensive house dataset validate the superior accuracy, novelty, and semantic fidelity of our model compared to existing generative models.
More results and details can be found at https://vcc.tech/research/2024/3DWire. |
2406.19223 | Tokenizers are crucial for encoding information in Large Language Models, but their development has recently stagnated, and they contain inherent weaknesses. Major limitations include computational overhead, ineffective vocabulary use, and unnecessarily large embedding and head layers. Additionally, their performance is biased towards a reference corpus, leading to reduced effectiveness for underrepresented languages.
To remedy these issues, we propose T-Free, which directly embeds words through sparse activation patterns over character triplets and does not require a reference corpus. T-Free inherently exploits morphological similarities and allows for strong compression of embedding layers.
In our exhaustive experimental evaluation, we achieve competitive downstream performance with a parameter reduction
of more than 85% on these layers.
Further, T-Free shows significant improvements in cross-lingual transfer learning. |
2402.03496 | Adaptive gradient optimizers like Adam(W) are the default training algorithms for many deep learning architectures, such as transformers.
Their diagonal preconditioner is based on the gradient outer product which is incorporated into the parameter update via a square root.
While these methods are often motivated as approximate second-order methods, the square root represents a fundamental difference.
In this work, we investigate how the behavior of adaptive methods changes when we remove the root, i.e. strengthen their second-order motivation.
Surprisingly, we find that such square-root-free adaptive methods close the generalization gap to SGD on convolutional architectures, while maintaining their root-based counterpart's performance on transformers.
The second-order perspective also has practical benefits for the development of adaptive methods with non-diagonal preconditioner.
In contrast to root-based counterparts like Shampoo, they do not require numerically unstable matrix square roots and therefore work well in low precision, which we demonstrate empirically.
This raises important questions regarding the currently overlooked role of adaptivity for the success of adaptive methods since the success is often attributed to sign descent induced by the root. |
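A minimal sketch of a square-root-free adaptive step, assuming it differs from an Adam/RMSprop-style update only in using m / (v + eps) instead of m / (sqrt(v) + eps); the learning-rate scale, bias correction, and damping used in the methods actually studied may differ.

```python
# Square-root-free adaptive update: same moment estimates as Adam, but the
# second-moment preconditioner is applied without the square root.
import torch

@torch.no_grad()
def sqrt_free_adaptive_step(params, grads, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    for p, g in zip(params, grads):
        m, v = state.setdefault(id(p), (torch.zeros_like(p), torch.zeros_like(p)))
        m.mul_(beta1).add_(g, alpha=1 - beta1)        # first moment (momentum)
        v.mul_(beta2).add_(g * g, alpha=1 - beta2)    # diagonal of the gradient outer product
        p.add_(m / (v + eps), alpha=-lr)              # NOTE: no sqrt on v, unlike Adam
```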
2407.01449 | Documents are visually rich structures that convey information through text, as well as tables, figures, page layouts, or fonts. While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval Augmented Generation.
To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark ViDoRe, composed of various page-level retrieval tasks spanning multiple domains, languages, and settings.
The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, ColPali, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages. Combined with a late interaction matching mechanism, ColPali largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.
We release all project artifacts at https://huggingface.co/vidore. |
2407.11966 | Good weight initialization serves as an effective measure to reduce the training cost of a deep neural network (DNN) model. The choice of how to initialize parameters is challenging and may require manual tuning, which can be time-consuming and prone to human error. To overcome such limitations, this work takes a novel step towards building a weight generator to synthesize the neural weights for initialization. We use the image-to-image translation task with generative adversarial networks (GANs) as an example due to the ease of collecting model weights spanning a wide range. Specifically, we first collect a dataset with various image editing concepts and their corresponding trained weights, which are later used for the training of the weight generator. To address the different characteristics among layers and the substantial number of weights to be predicted, we divide the weights into equal-sized blocks and assign each block an index. Subsequently, a diffusion model is trained with such a dataset using both text conditions of the concept and the block indexes. By initializing the image translation model with the denoised weights predicted by our diffusion model, the training requires only seconds. Compared to training from scratch (i.e., Pix2pix), we achieve a training time acceleration for a new concept while obtaining even better image generation quality. |
2401.16380 | Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased.
Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained.
This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web.
In this work, we propose Web Rephrase Augmented Pre-training (WRAP), which uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as "like Wikipedia" or in "question-answer format" to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy, speeds up pre-training by ~3x. At the same pre-training compute budget, it improves perplexity by more than 10% on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2%. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher 'quality' than web-scraped data. |
2406.18518 | The advancement of function-calling agent models requires diverse, reliable, and high-quality datasets. This paper presents APIGen, an automated data generation pipeline designed to synthesize verifiable high-quality datasets for function-calling applications. We leverage APIGen and collect 3,673 executable APIs across 21 different categories to generate diverse function-calling datasets in a scalable and structured manner. Each data point in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, ensuring its reliability and correctness. We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models. Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku. We release a dataset containing 60,000 high-quality entries, aiming to advance the field of function-calling agent domains. The dataset is available on Huggingface and the project homepage. |
2406.00770 | Fine-tuning large pre-trained language models with Evol-Instruct has achieved encouraging results across a wide range of tasks. However, designing effective evolving methods for instruction evolution requires substantial human expertise. This paper proposes Auto Evol-Instruct, an end-to-end framework that evolves instruction datasets using large language models without any human effort. The framework automatically analyzes and summarizes suitable evolutionary strategies for the given instruction data and iteratively improves the evolving method based on issues exposed during the instruction evolution process. Our extensive experiments demonstrate that the best method optimized by Auto Evol-Instruct outperforms human-designed methods on various benchmarks, including MT-Bench, AlpacaEval, GSM8K, and HumanEval. |
2407.09450 | Large language models (LLMs) have shown remarkable capabilities, but still struggle with processing extensive contexts, limiting their ability to maintain coherence and accuracy over long sequences. In contrast, the human brain excels at organising and retrieving episodic experiences across vast temporal scales, spanning a lifetime. In this work, we introduce EM-LLM, a novel approach that integrates key aspects of human episodic memory and event cognition into LLMs, enabling them to effectively handle practically infinite context lengths while maintaining computational efficiency. EM-LLM organises sequences of tokens into coherent episodic events using a combination of Bayesian surprise and graph-theoretic boundary refinement in an on-line fashion. When needed, these events are retrieved through a two-stage memory process, combining similarity-based and temporally contiguous retrieval for efficient and human-like access to relevant information. Experiments on the LongBench dataset demonstrate EM-LLM's superior performance, outperforming the state-of-the-art InfLLM model with an overall relative improvement across various tasks, including an improvement on the PassageRetrieval task. Furthermore, our analysis reveals strong correlations between EM-LLM's event segmentation and human-perceived events, suggesting a bridge between this artificial system and its biological counterpart. This work not only advances LLM capabilities in processing extended contexts but also provides a computational framework for exploring human memory mechanisms, opening new avenues for interdisciplinary research in AI and cognitive science. |
2407.07565 | In this paper we consider contamination by code generation test sets, in particular in their use in modern large language models.
We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. Key to our findings is a new dataset of 161 prompts with their associated Python solutions, which is released at https://huggingface.co/datasets/CohereForAI/lbpp. |
2406.15877 | Automated software engineering has been greatly empowered by the recent advances in Large Language Models (LLMs) for programming. While current benchmarks have shown that LLMs can perform various software engineering tasks like human developers, the majority of their evaluations are limited to short and self-contained algorithmic tasks.
Solving challenging and practical programming tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development.
In addition, using multiple tools to solve a task requires compositional reasoning by accurately understanding complex instructions. Fulfilling both of these characteristics can pose a great challenge for LLMs.
To assess how well LLMs can solve challenging and practical programming tasks, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries and 7 domains for 1,140 fine-grained programming tasks.
To evaluate LLMs rigorously, each programming task encompasses 5.6 test cases with an average branch coverage of 99%.
In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, that automatically transforms the original docstrings into short instructions containing only the essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores of up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area. |
2407.09025 | Spreadsheets are characterized by their extensive two-dimensional grids, flexible layouts, and varied formatting options, which pose significant challenges for large language models (LLMs). In response, we introduce SpreadsheetLLM, pioneering an efficient encoding method designed to unleash and optimize LLMs' powerful understanding and reasoning capability on spreadsheets. Initially, we propose a vanilla serialization approach that incorporates cell addresses, values, and formats. However, this approach was limited by LLMs' token constraints, making it impractical for most applications. To tackle this challenge, we develop SheetCompressor, an innovative encoding framework that compresses spreadsheets effectively for LLMs. It comprises three modules: structural-anchor-based compression, inverse index translation, and data-format-aware aggregation. It significantly improves performance in the spreadsheet table detection task, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting. Moreover, a fine-tuned LLM with SheetCompressor has an average compression ratio of 25×, but achieves a state-of-the-art 78.9% F1 score, surpassing the best existing models by 12.3%.
Finally, we propose Chain of Spreadsheet for downstream tasks of spreadsheet understanding and validate it in a new and demanding spreadsheet QA task. We methodically leverage the inherent layout and structure of spreadsheets, demonstrating that SpreadsheetLLM is highly effective across a variety of spreadsheet tasks. |
2406.06608 | Generative Artificial Intelligence (GenAI) systems are being increasingly deployed across all parts of industry and research settings. Developers and end users interact with these systems through the use of prompting or prompt engineering. While prompting is a widespread and highly researched concept, there exists conflicting terminology and a poor ontological understanding of what constitutes a prompt due to the area’s nascency. This paper establishes a structured understanding of prompts, by assembling a taxonomy of prompting techniques and analyzing their use. We present a comprehensive vocabulary of 33 vocabulary terms, a taxonomy of 58 text-only prompting techniques, and 40 techniques for other modalities. We further present a meta-analysis of the entire literature on natural language prefix-prompting. |
2012.15723 | The recent GPT-3 model achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context.
Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient.
We present LM-BFF—better few-shot fine-tuning of language models (alternatively, language models' best friends forever)—a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context.
Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning. (Our implementation is publicly available at https://github.com/princeton-nlp/LM-BFF.) |
2310.16164v1 | Large Language Models (LLMs) are being increasingly employed in data science for tasks like data preprocessing and analytics. However, data scientists encounter substantial obstacles when conversing with LLM-powered chatbots and acting on their suggestions and answers. We conducted a mixed-methods study, including contextual observations, semi-structured interviews (n=14) , and a survey (n=114) , to identify these challenges. Our findings highlight key issues faced by data scientists, including contextual data retrieval, formulating prompts for complex tasks, adapting generated code to local environments, and refining prompts iteratively. Based on these insights, we propose actionable design recommendations, such as data brushing to support context selection, and inquisitive feedback loops to improve communications with AI-based assistants in data-science tools. |
2310.10134 | Language agents have shown some ability to interact with an external environment, e.g., a virtual
world such as ScienceWorld, to perform complex tasks, e.g., growing a plant, without the startup
costs of reinforcement learning. However, despite their zero-shot capabilities, these agents
to date do not continually improve over time, beyond performance refinement on a specific task.
Here we present CLIN, the first language-based agent to achieve this,
so that it continually improves over multiple trials, including when both the
environment and task are varied, and without requiring parameter updates.
Our approach is to use a persistent, dynamic, textual memory,
centered on causal abstractions (rather than general "helpful hints"),
that is regularly updated after each trial so that the agent gradually learns useful
knowledge for new trials. In the ScienceWorld benchmark, CLIN is able to continually improve on repeated trials on the same task and environment,
outperforming state-of-the-art reflective language agents like Reflexion by 23 absolute points.
CLIN can also transfer its learning to new environments (or new tasks) , improving its zero-shot performance
by 4 points (13 for new tasks) and can further improve performance there through continual memory updates, enhancing
performance by an additional 17 points (7 for new tasks) .
This suggests a new architecture for agents built on frozen models that can still
continually and rapidly improve over time. |
2407.05872 | Robust and effective scaling of models from small to large width typically requires the precise adjustment of many algorithmic and architectural details, such as parameterization and optimizer choices. In this work, we propose a new perspective on parameterization by investigating a key assumption in prior work about the alignment between parameters and data and derive new theoretical results under weaker assumptions and a broader set of optimizers. Our extensive empirical investigation includes *tens of thousands* of models trained with *all combinations of* three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes up to 26.8B parameters. We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work. Our results show that all parameterizations, not just maximal update parameterization (muP), can achieve hyperparameter transfer; moreover, our novel per-layer learning rate prescription for standard parameterization outperforms muP. Finally, we demonstrate that an overlooked aspect of parameterization, the epsilon parameter in Adam, must be scaled correctly to avoid gradient underflow and propose *Adamatan2*, a new numerically stable, scale-invariant version of Adam that eliminates the epsilon hyperparameter entirely. |
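The Adamatan2 idea mentioned above can be illustrated with the identity that atan2(m, s) approximates m / s when |m| is much smaller than s while remaining bounded as s approaches zero, which removes the need for an epsilon term. The exact scale factors of the proposed optimizer may differ from this bare sketch.

```python
# Illustration of the atan2 trick for Adam-style updates: it matches m/sqrt(v)
# in the usual regime but cannot blow up when the second moment underflows.
import torch

def atan2_update(m_hat: torch.Tensor, v_hat: torch.Tensor, lr: float) -> torch.Tensor:
    # Classic Adam would compute lr * m_hat / (v_hat.sqrt() + eps).
    return lr * torch.atan2(m_hat, v_hat.sqrt())

m = torch.tensor([1e-3, 1e-3])
v = torch.tensor([1e-2, 0.0])             # second entry would divide by zero in eps-free Adam
print(atan2_update(m, v, lr=1.0))         # finite and well-behaved in both cases
```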
2405.14366 | A critical approach for efficiently deploying computationally demanding large language models (LLMs) is Key-Value (KV) caching. The KV cache stores key-value states of previously generated tokens, significantly reducing the need for repetitive computations and thereby lowering latency in autoregressive generation.
However, the size of the KV cache grows linearly with sequence length, posing challenges for applications requiring long context input and extensive sequence generation.
In this paper, we present a simple yet effective approach, called MiniCache, to compress the KV cache across layers from a novel depth perspective,
significantly reducing the memory footprint for LLM inference.
Our approach is based on the observation that KV cache states exhibit high similarity
between the adjacent layers
in the middle-to-deep portion of LLMs.
To facilitate merging, we propose disentangling the states into the magnitude and direction components, interpolating the directions of the state vectors while preserving their lengths unchanged. Furthermore, we introduce a token retention strategy to keep highly distinct state pairs unmerged, thus preserving the information with minimal additional storage overhead.
Our MiniCache is training-free and general, complementing existing KV cache compression strategies, such as quantization and sparsity.
We conduct a comprehensive evaluation of MiniCache utilizing various models including LLaMA-2, LLaMA-3, Phi-3, Mistral, and Mixtral across multiple benchmarks, demonstrating its exceptional performance in achieving superior compression ratios and high throughput.
On the ShareGPT dataset, LLaMA-2-7B with 4-bit MiniCache achieves a substantially higher compression ratio, greater inference throughput, and a smaller memory footprint than the FP16 full-cache baseline, all while maintaining near-lossless performance. |
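
To make the merge step in this abstract concrete, here is a minimal sketch of the magnitude/direction decomposition it describes. Spherical interpolation of the unit directions is used as one plausible interpolation scheme, and the mixing weight `t` and retention threshold are illustrative assumptions rather than the paper's settings.

```python
import torch

def merge_adjacent_kv(x_shallow, x_deep, t=0.5, sim_threshold=0.0, eps=1e-8):
    """Merge KV states of two adjacent layers: disentangle each state into
    magnitude and direction, interpolate only the directions, and keep the
    per-layer magnitudes so each layer's state can be approximately rebuilt.
    Shapes: [..., head_dim]."""
    mag_s = x_shallow.norm(dim=-1, keepdim=True)
    mag_d = x_deep.norm(dim=-1, keepdim=True)
    dir_s = x_shallow / (mag_s + eps)
    dir_d = x_deep / (mag_d + eps)
    cos = (dir_s * dir_d).sum(-1, keepdim=True).clamp(-1 + 1e-6, 1 - 1e-6)
    omega = torch.acos(cos)
    shared_dir = (torch.sin((1 - t) * omega) * dir_s +
                  torch.sin(t * omega) * dir_d) / torch.sin(omega)
    # Token retention: pairs whose directions are too dissimilar would be kept
    # unmerged; here we only flag them instead of storing them separately.
    keep_unmerged = cos.squeeze(-1) < sim_threshold
    return shared_dir, (mag_s, mag_d), keep_unmerged
```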
2403.10616 | Progress in machine learning (ML) has been fueled by scaling neural network models. This scaling has been enabled by ever more heroic feats of engineering, necessary for accommodating ML approaches that require high bandwidth communication between devices working in parallel.
In this work, we propose a co-designed modular architecture and training approach for ML models, dubbed DIstributed PAth COmposition (DiPaCo) . During training, DiPaCo distributes computation by paths through a set of shared modules. Together with a Local-SGD inspired optimization (DiLoCo) that keeps modules in sync with drastically reduced communication,
our approach facilitates training across poorly connected and heterogeneous workers, with a design that ensures robustness to worker failures and preemptions. At inference time, only a single path needs to be executed for each input, without the need for any model compression. We
consider this approach as a first prototype towards a new paradigm of large-scale learning, one that is less synchronous and more modular. Our experiments on the widely used C4 benchmark show that, for the same amount of training steps but less wall-clock time, DiPaCo exceeds the performance of a 1 billion-parameter dense transformer language model by choosing one of 256 possible paths, each with a size of 150 million parameters. |
2406.05184v2 | Generative text-to-image models enable us to synthesize unlimited amounts of images in a controllable manner, spurring many recent efforts to train vision models with synthetic data.
However, every synthetic image ultimately originates from the upstream data used to train the generator. What additional value does the intermediate generator provide over directly training on relevant parts of the upstream data?
Grounding this question in the setting of image classification, we compare finetuning on task-relevant, targeted synthetic data generated by Stable Diffusion—a generative model trained on the LAION-2B dataset—against finetuning on targeted real images retrieved directly from LAION-2B. We show that while synthetic data can benefit some downstream tasks, it is universally matched or outperformed by real data from the simple retrieval baseline.
Our analysis suggests that this underperformance is partially due to generator artifacts and inaccurate task-relevant visual details in the synthetic images.
Overall, we argue that retrieval is a critical baseline to consider when training with synthetic data—a baseline that current methods do not yet surpass. We release code, data, and models at https://github.com/scottgeng00/unmet-promise. |
2306.05426 | In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation.
However, this maximum-likelihood (MLE) objective does not necessarily match a downstream use-case of autoregressively generating high-quality sequences.
The MLE objective weights sequences proportionally to their frequency under the data distribution, with no guidance for the model’s behaviour out of distribution (OOD), leading to compounding error during autoregressive generation.
In order to address this compounding error problem, we formulate sequence generation as an imitation learning (IL) problem. This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences with weight on OOD generated sequences.
The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process. This further mitigates the compounding error problem by allowing the model to revert a sampled token if it takes the sequence OOD.
Our resulting method, SequenceMatch, can be implemented without adversarial training or architectural changes.
We identify the SequenceMatch divergence as a more suitable training objective for autoregressive models which are used for generation. We show empirically that SequenceMatch training leads to improvements over MLE on text generation with language models. |
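
To illustrate the backspace action, here is a sketch of a decoding loop in which sampling a designated backspace token removes the last generated token instead of appending a new one. The `sample_next` callable and the separate backspace token id are assumptions for illustration; SequenceMatch itself concerns the training objective, which is not shown.

```python
def generate_with_backspace(sample_next, ids: list[int], bksp_id: int, max_steps: int = 128) -> list[int]:
    """Decoding loop with a backspace action: when the model samples `bksp_id`,
    the last generated token is removed instead of appended, reverting a step
    that may have taken the sequence out of distribution."""
    for _ in range(max_steps):
        next_id = sample_next(ids)      # hypothetical sampling callable over current ids
        if next_id == bksp_id and ids:
            ids.pop()                   # revert the previous token
        else:
            ids.append(next_id)
    return ids
```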
2403.09347 | Effective attention modules have played a crucial role in the success of Transformer-based large language models
(LLMs), but the quadratic time and memory complexities of these attention modules also pose a challenge when processing long sequences.
One potential solution for the long sequence problem is to utilize distributed clusters to parallelize the computation of attention modules across multiple devices
(e.g., GPUs).
However, adopting a distributed approach inevitably introduces extra memory overheads to store local attention results and incurs additional communication costs to aggregate local results into global ones.
In this paper, we propose a distributed attention framework named “BurstAttention” to optimize memory access and communication operations at both the global cluster and local device levels by partitioning attention along the sequence dimension across devices.
Through our experiments, we compare BurstAttention with other competitive long-sequence distributed attention solutions.
The experimental results under different sequence lengths demonstrate that BurstAttention offers significant advantages for processing long sequences compared with these competitive baselines, especially tensor parallelism (Megatron-V3) with FlashAttention, reducing communication overheads by 40% and achieving a 2× speedup when training on a 128K sequence length across 8×A100 GPUs. |
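
The key property that makes sequence-dimension partitioning workable is that per-chunk attention results can be combined using running log-sum-exp statistics rather than materializing the full score matrix. The single-process sketch below simulates that aggregation; in a real cluster each chunk would live on a different device and the statistics would be exchanged via collectives, which is not shown and which the paper optimizes further.

```python
import torch

def sequence_parallel_attention(q, k_chunks, v_chunks):
    """Single-process simulation of attention partitioned along the sequence
    dimension: only small per-query statistics (a running log-sum-exp and a
    running weighted output) are aggregated across chunks."""
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                                       # running output
    lse = torch.full(q.shape[:-1], float("-inf"), dtype=q.dtype)    # running log-sum-exp
    for k, v in zip(k_chunks, v_chunks):
        scores = (q @ k.transpose(-1, -2)) * scale      # [..., q_len, kv_chunk]
        chunk_lse = torch.logsumexp(scores, dim=-1)     # [..., q_len]
        chunk_out = torch.softmax(scores, dim=-1) @ v   # [..., q_len, head_dim]
        new_lse = torch.logaddexp(lse, chunk_lse)
        acc = (acc * (lse - new_lse).exp().unsqueeze(-1)
               + chunk_out * (chunk_lse - new_lse).exp().unsqueeze(-1))
        lse = new_lse
    return acc
```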
2309.05516 | Large Language Models (LLMs) have proven their exceptional capabilities in performing language-related tasks. However, their deployment poses significant challenges due to their considerable memory and storage requirements. In response to this issue, weight-only quantization, particularly 3 and 4-bit weight-only quantization, has emerged as one of the most viable solutions. As the number of bits decreases, the quantization grid broadens, thus emphasizing the importance of up and down rounding. While previous studies have demonstrated that fine-tuning up and down rounding with the addition of perturbations can enhance accuracy in some scenarios, our study is driven by the precise and limited boundary of these perturbations, where only the threshold for altering the rounding value is of significance. Consequently, we propose a concise and highly effective approach for optimizing the weight rounding task. Our method, named SignRound, involves lightweight block-wise tuning using signed gradient descent, enabling us to achieve outstanding results within 400 steps. SignRound competes impressively against recent methods without introducing additional inference overhead.
The source code will be publicly available at https://github.com/intel/neural-compressor soon. |
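
The sketch below illustrates the signed-gradient-descent rounding idea from this abstract: a bounded perturbation decides whether each weight rounds up or down, and only its sign-based updates matter. The plain weight-reconstruction loss and the learning rate are stand-ins, not the paper's block-wise output reconstruction objective or its actual hyperparameters.

```python
import torch

def signround_block(w: torch.Tensor, scale: torch.Tensor, steps: int = 400, lr: float = 1e-3):
    """Rough sketch of rounding optimization with signed gradient descent:
    a perturbation `v` in [-0.5, 0.5] shifts each weight across (or away from)
    its rounding threshold, and `v` is updated with the *sign* of its gradient."""
    v = torch.zeros_like(w, requires_grad=True)
    for _ in range(steps):
        soft = w / scale + v
        hard = torch.round(soft)
        # Straight-through estimator so gradients reach v through round().
        w_q = (soft + (hard - soft).detach()) * scale
        loss = (w_q - w).pow(2).mean()      # stand-in for block-wise output reconstruction
        loss.backward()
        with torch.no_grad():
            v -= lr * v.grad.sign()
            v.clamp_(-0.5, 0.5)             # only the rounding threshold matters
            v.grad.zero_()
    return torch.round(w / scale + v.detach()) * scale
```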
2401.10491 | While training large language models (LLMs) from scratch can generate models with distinct functionalities and strengths, it comes at significant costs and may result in redundant capabilities. Alternatively, a cost-effective and compelling approach is to merge existing pre-trained LLMs into a more potent model. However, due to the varying architectures of these LLMs, directly blending their weights is impractical. In this paper, we introduce the notion of knowledge fusion for LLMs, aimed at combining the capabilities of existing LLMs and transferring them into a single LLM. By leveraging the generative distributions of source LLMs, we externalize their collective knowledge and unique strengths, thereby potentially elevating the capabilities of the target model beyond those of any individual source LLM. We validate our approach using three popular LLMs with different architectures—Llama-2, MPT, and OpenLLaMA—across various benchmarks and tasks. Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation. Our code, model weights, and data are public at https://github.com/fanqiwan/FuseLLM. |
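
A minimal sketch of what a fusion-style training objective could look like, assuming source and target vocabularies have already been aligned (the alignment step, which the paper handles explicitly, is omitted) and using simple probability averaging as a stand-in for the paper's fusion function; padding handling is also omitted.

```python
import torch
import torch.nn.functional as F

def fusion_loss(target_logits, source_logits_list, labels, lam=0.9):
    """Combine the usual next-token cross-entropy with a KL term pulling the
    target model toward a fused distribution built from several source models.
    Shapes: logits [B, T, V], labels [B, T] with -100 for ignored positions."""
    ce = F.cross_entropy(target_logits.flatten(0, 1), labels.flatten(), ignore_index=-100)
    with torch.no_grad():
        # Stand-in fusion rule: average the source probability distributions.
        fused = torch.stack([F.softmax(s, dim=-1) for s in source_logits_list]).mean(0)
    kl = F.kl_div(F.log_softmax(target_logits, dim=-1), fused, reduction="batchmean")
    return lam * ce + (1 - lam) * kl
```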
2305.14325 | Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification and self-consistency to intermediate scratchpads.
In this paper, we present a complementary approach to improve language responses
where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer.
Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate.
Overall, our findings suggest that such a "society of minds" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding. Project website at https://composable-models.github.io/llm_debate/. |
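
A minimal sketch of the round-based debate procedure this abstract describes, assuming a generic text-completion callable `llm`; the prompts are illustrative rather than the paper's exact ones, and a final answer would typically be selected by majority vote over the agents' last responses.

```python
def multiagent_debate(llm, question: str, n_agents: int = 3, n_rounds: int = 2) -> list[str]:
    """Several model instances answer independently, then each revises its
    answer after reading the other agents' responses, for a fixed number of rounds."""
    answers = [llm(f"Answer the question step by step:\n{question}") for _ in range(n_agents)]
    for _ in range(n_rounds):
        new_answers = []
        for i in range(n_agents):
            others = "\n\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n\nOther agents answered:\n{others}\n\n"
                f"Your previous answer:\n{answers[i]}\n\n"
                "Considering the other answers, give an updated final answer."
            )
            new_answers.append(llm(prompt))
        answers = new_answers
    return answers
```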