Dataset columns (name: dtype, observed range):
bibtex_url: null
proceedings: stringlengths (42 to 42)
bibtext: stringlengths (238 to 833)
abstract: stringlengths (649 to 2.54k)
title: stringlengths (31 to 135)
authors: sequencelengths (1 to 31)
id: stringclasses (1 value)
type: stringclasses (2 values)
arxiv_id: stringlengths (0 to 10)
GitHub: sequencelengths (1 to 1)
paper_page: stringlengths (0 to 40)
n_linked_authors: int64 (-1 to 10)
upvotes: int64 (-1 to 72)
num_comments: int64 (-1 to 5)
n_authors: int64 (-1 to 27)
Models: sequencelengths (0 to 28)
Datasets: sequencelengths (0 to 14)
Spaces: sequencelengths (0 to 9)
paper_page_exists_pre_conf: int64 (0 to 1)
unique_id: int64 (0 to 298)
null
https://openreview.net/forum?id=fib9qidCpY
@inproceedings{ hennigen2024towards, title={Towards Verifiable Text Generation with Symbolic References}, author={Lucas Torroba Hennigen and Zejiang Shen and Aniruddha Nrusimha and Bernhard Gapp and David Sontag and Yoon Kim}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=fib9qidCpY} }
LLMs are vulnerable to hallucinations, and thus their outputs generally require laborious human verification for high-stakes applications. To this end, we propose symbolically grounded generation (SymGen) as a simple approach for enabling easier manual validation of an LLM’s output. SymGen prompts an LLM to interleave its regular output text with explicit symbolic references to fields present in some conditioning data (e.g., a table in JSON format). The references can be used to display the provenance of different spans of text in the generation, reducing the effort required for manual verification. Across a range of data-to-text and question-answering experiments, we find that LLMs are able to directly output text that makes use of accurate symbolic references while maintaining fluency and factuality. In a human study we further find that such annotations can streamline human verification of machine-generated text.
Towards Verifiable Text Generation with Symbolic References
[ "Lucas Torroba Hennigen", "Zejiang Shen", "Aniruddha Nrusimha", "Bernhard Gapp", "David Sontag", "Yoon Kim" ]
Conference
Poster
2311.09188
[ "" ]
https://huggingface.co/papers/2311.09188
3
0
0
6
[]
[]
[]
1
100
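The SymGen record above describes prompting an LLM to interleave its output with symbolic references to fields of the conditioning data, which are later resolved for display. A minimal sketch of such a resolution step, assuming a hypothetical `{{path.to.field}}` placeholder syntax (the paper's exact reference format is not specified here):

```python
import re
from typing import Any

def resolve_symbolic_refs(generation: str, data: dict) -> str:
    """Replace hypothetical {{a.b.c}} placeholders with values looked up in `data`."""
    def lookup(path: str) -> Any:
        node = data
        for key in path.split("."):
            node = node[key]  # a KeyError surfaces an unverifiable reference
        return node

    return re.sub(
        r"\{\{([\w.]+)\}\}",
        lambda m: str(lookup(m.group(1))),
        generation,
    )

# Toy usage: the generated text keeps provenance explicit via references.
record = {"patient": {"name": "Jane Doe", "age": 54}}
text = "{{patient.name}} is {{patient.age}} years old."
print(resolve_symbolic_refs(text, record))  # -> "Jane Doe is 54 years old."
```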
null
https://openreview.net/forum?id=egVSgtJJAx
@inproceedings{ liu2024visualwebbench, title={VisualWebBench: How Far Have Multimodal {LLM}s Evolved in Web Page Understanding and Grounding?}, author={Junpeng Liu and Yifan Song and Bill Yuchen Lin and Wai Lam and Graham Neubig and Yuanzhi Li and Xiang Yue}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=egVSgtJJAx} }
Multimodal Large Language models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks. Existing benchmarks are either designed for general multimodal tasks, failing to capture the unique characteristics of web pages, or focus on end-to-end web agent tasks, unable to measure fine-grained abilities such as OCR, understanding, and grounding. In this paper, we introduce VisualWebBench, a multimodal benchmark designed to assess the capabilities of MLLMs across a variety of web tasks. VisualWebBench consists of seven tasks, and comprises 1.5K human-curated instances from 139 real websites, covering 87 sub-domains. We evaluate 16 open-source MLLMs, Gemini Pro, Claude-3 series, and GPT-4V(ision) on VisualWebBench, revealing significant challenges and performance gaps. Further analysis highlights the limitations of current MLLMs, including inadequate grounding in text-rich environments and subpar performance with low-resolution image inputs. We believe VisualWebBench will serve as a valuable resource for the research community and contribute to the creation of more powerful and versatile MLLMs for web-related applications.
VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?
[ "Junpeng Liu", "Yifan Song", "Bill Yuchen Lin", "Wai Lam", "Graham Neubig", "Yuanzhi Li", "Xiang Yue" ]
Conference
Poster
2404.05955
[ "" ]
https://huggingface.co/papers/2404.05955
1
0
0
7
[]
[ "visualwebbench/VisualWebBench" ]
[]
1
101
null
https://openreview.net/forum?id=eJ3cHNu7ss
@inproceedings{ chen2024huatuogptii, title={Huatuo{GPT}-{II}, One-stage Training for Medical Adaption of {LLM}s}, author={Junying Chen and Xidong Wang and Ke Ji and Anningzhe Gao and Feng Jiang and Shunian Chen and Hongbo Zhang and Song Dingjie and Wenya Xie and Chuyi Kong and Jianquan Li and Xiang Wan and Haizhou Li and Benyou Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=eJ3cHNu7ss} }
Adapting a language model (LM) into a specific domain, *a.k.a* domain adaption, is a common practice when specialized knowledge, e.g. medicine, is not encapsulated in a general language model like Llama2. This typically involves a two-stage process including *continued pre-training* and *supervised fine-tuning*. Implementing a pipeline solution with these two stages not only introduces complexities (necessitating dual meticulous tuning) but also leads to two occurrences of data distribution shifts, exacerbating catastrophic forgetting. To mitigate these, we propose a one-stage domain adaption protocol where heterogeneous data from both the traditional pre-training and supervised stages are unified into a simple instruction-output pair format to achieve efficient knowledge injection. Subsequently, a data priority sampling strategy is introduced to adaptively adjust data mixture during training. Following this protocol, we train HuatuoGPT-II, a specialized LLM for the medical domain in Chinese. HuatuoGPT-II achieves competitive performance with GPT4 across multiple benchmarks, which especially shows the state-of-the-art (SOTA) performance in multiple Chinese medical benchmarks and the newest pharmacist licensure examinations. Furthermore, we explore the phenomenon of one-stage protocols, and the experiments reflect that the simplicity of the proposed protocol improves training stability and domain generalization.
HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs
[ "Junying Chen", "Xidong Wang", "Ke Ji", "Anningzhe Gao", "Feng Jiang", "Shunian Chen", "Hongbo Zhang", "Song Dingjie", "Wenya Xie", "Chuyi Kong", "Jianquan Li", "Xiang Wan", "Haizhou Li", "Benyou Wang" ]
Conference
Poster
2311.09774
[ "https://github.com/freedomintelligence/huatuogpt-ii" ]
-1
-1
-1
-1
[]
[]
[]
0
102
null
https://openreview.net/forum?id=eGCw1UVOhk
@inproceedings{ kirchenbauer2024lmd, title={{LMD}3: Language Model Data Density Dependence}, author={John Kirchenbauer and Garrett Honke and Gowthami Somepalli and Jonas Geiping and Katherine Lee and Daphne Ippolito and Tom Goldstein and David Andre}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=eGCw1UVOhk} }
We develop a methodology for analyzing language model task performance at the individual example level based on training data density estimation. Experiments with paraphrasing as a controlled intervention on finetuning data demonstrate that increasing the support in the training distribution for specific test queries results in a measurable increase in density, which is also a significant predictor of the performance increase caused by the intervention. Experiments with pretraining data demonstrate that we can explain a significant fraction of the variance in model perplexity via density measurements. We conclude that our framework can provide statistical evidence of the dependence of a target model’s predictions on subsets of its training data, and can more generally be used to characterize the support (or lack thereof) in the training data for a given test task.
LMD3: Language Model Data Density Dependence
[ "John Kirchenbauer", "Garrett Honke", "Gowthami Somepalli", "Jonas Geiping", "Katherine Lee", "Daphne Ippolito", "Tom Goldstein", "David Andre" ]
Conference
Poster
2405.06331
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
103
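The LMD3 record above relates per-example test performance to the density of the training data around each test query. A minimal sketch of one way to score test queries by training-data density, assuming example embeddings are already computed and using a Gaussian kernel density estimate (the paper's actual density estimator and embedding choice are assumptions here):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_scores(train_emb: np.ndarray, test_emb: np.ndarray, bandwidth: float = 0.5) -> np.ndarray:
    """Fit a KDE on training-example embeddings and return log-density at each test query."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(train_emb)
    return kde.score_samples(test_emb)  # higher = better supported by the training data

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 32))
test_emb = rng.normal(size=(5, 32))
print(density_scores(train_emb, test_emb))
```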
null
https://openreview.net/forum?id=eDWcNqiQWW
@inproceedings{ ahrabian2024the, title={The Curious Case of Nonverbal Abstract Reasoning with Multi-Modal Large Language Models}, author={Kian Ahrabian and Zhivar Sourati and Kexuan Sun and Jiarui Zhang and Yifan Jiang and Fred Morstatter and Jay Pujara}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=eDWcNqiQWW} }
While large language models (LLMs) are still being adopted to new domains and utilized in novel applications, we are experiencing an influx of the new generation of foundation models, namely multi-modal large language models (MLLMs). These models integrate verbal and visual information, opening new possibilities to demonstrate more complex reasoning abilities at the intersection of the two modalities. However, despite the revolutionizing prospect of MLLMs, our understanding of their reasoning abilities is limited. In this study, we assess the nonverbal abstract reasoning abilities of open-source and closed-source MLLMs using variations of Raven's Progressive Matrices. Our experiments reveal the challenging nature of such problems for MLLMs while showcasing the immense gap between open-source and closed-source models. We also uncover critical shortcomings of visual and textual perceptions, subjecting the models to low-performance ceilings. Finally, to improve MLLMs' performance, we experiment with different methods, such as Chain-of-Thought prompting, leading to a significant (up to 100\%) boost in performance.
The Curious Case of Nonverbal Abstract Reasoning with Multi-Modal Large Language Models
[ "Kian Ahrabian", "Zhivar Sourati", "Kexuan Sun", "Jiarui Zhang", "Yifan Jiang", "Fred Morstatter", "Jay Pujara" ]
Conference
Poster
2401.12117
[ "" ]
https://huggingface.co/papers/2401.12117
1
1
0
7
[]
[]
[]
1
104
null
https://openreview.net/forum?id=dribhnhm1i
@inproceedings{ liu2024tuning, title={Tuning Language Models by Proxy}, author={Alisa Liu and Xiaochuang Han and Yizhong Wang and Yulia Tsvetkov and Yejin Choi and Noah A. Smith}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dribhnhm1i} }
Despite the general capabilities of large pretrained language models, they consistently benefit from further adaptation to better achieve desired behaviors. However, tuning these models has become increasingly resource-intensive, or impossible when model weights are private. We introduce **proxy-tuning**, a lightweight decoding-time algorithm that operates on top of black-box LMs to achieve the same end as direct tuning, but by accessing only its predictions over the output vocabulary, not its parameters. Our method tunes a *smaller* LM, then applies the difference between the predictions of the small tuned and untuned LMs to shift the original predictions of the larger untuned model in the direction of tuning, while retaining the benefits of larger-scale pretraining. In experiments, when we apply proxy-tuning to Llama2-70B using proxies of only 7B size, we can close 88% of the gap between Llama2-70B and its truly-tuned chat version, when evaluated across knowledge, reasoning, and safety benchmarks. We then demonstrate the generality of proxy-tuning by applying it to domain adaptation on code, and task-specific finetuning on question-answering and math problems. Finally, we show how to proxy-tune a truly black-box LM, GPT-3.5, for temporal adaptation, increasing its knowledge about recent events. Our work demonstrates the promise of using small tuned LMs to efficiently customize large, potentially proprietary LMs through decoding-time guidance.
Tuning Language Models by Proxy
[ "Alisa Liu", "Xiaochuang Han", "Yizhong Wang", "Yulia Tsvetkov", "Yejin Choi", "Noah A. Smith" ]
Conference
Oral
2401.08565
[ "https://github.com/alisawuffles/proxy-tuning" ]
https://huggingface.co/papers/2401.08565
4
20
2
6
[]
[]
[]
1
105
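The proxy-tuning record above describes a decoding-time logit arithmetic: the large untuned model's next-token logits are shifted by the difference between a small tuned proxy and its small untuned counterpart. A minimal sketch of that combination step, assuming all three models share a vocabulary and that next-token logits are already available:

```python
import torch

def proxy_tuned_logits(
    logits_large_untuned: torch.Tensor,  # (vocab,) from the big, possibly black-box model
    logits_small_tuned: torch.Tensor,    # (vocab,) from the small tuned proxy
    logits_small_untuned: torch.Tensor,  # (vocab,) from the small untuned proxy
) -> torch.Tensor:
    """Shift the large model's logits in the direction implied by tuning the proxy."""
    return logits_large_untuned + (logits_small_tuned - logits_small_untuned)

def sample_next_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Standard temperature sampling over the combined logits."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

# At each decoding step, combine the three logit vectors and sample as usual.
```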
null
https://openreview.net/forum?id=dnwRScljXr
@inproceedings{ kamoi2024evaluating, title={Evaluating {LLM}s at Detecting Errors in {LLM} Responses}, author={Ryo Kamoi and Sarkar Snigdha Sarathi Das and Renze Lou and Jihyun Janice Ahn and Yilun Zhao and Xiaoxin Lu and Nan Zhang and Yusen Zhang and Haoran Ranran Zhang and Sujeeth Reddy Vummanthala and Salika Dave and Shaobo Qin and Arman Cohan and Wenpeng Yin and Rui Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dnwRScljXr} }
With Large Language Models (LLMs) being widely used across various tasks, detecting errors in their responses is increasingly crucial. However, little research has been conducted on error detection of LLM responses. Collecting error annotations on LLM responses is challenging due to the subjective nature of many NLP tasks, and thus previous research focuses on tasks of little practical value (e.g., word sorting) or limited error types (e.g., faithfulness in summarization). This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs. ReaLMistake contains three challenging and meaningful tasks that introduce objectively assessable errors in four categories (reasoning correctness, instruction-following, context-faithfulness, and parameterized knowledge), eliciting naturally observed and diverse errors in responses of GPT-4 and Llama 2 70B annotated by experts. We use ReaLMistake to evaluate error detectors based on 12 LLMs. Our findings show: 1) Top LLMs like GPT-4 and Claude 3 detect errors made by LLMs at very low recall, and all LLM-based error detectors perform much worse than humans. 2) Explanations by LLM-based error detectors lack reliability. 3) LLMs-based error detection is sensitive to small changes in prompts but remains challenging to improve. 4) Popular approaches to improving LLMs, including self-consistency and majority vote, do not improve the error detection performance. Our benchmark and code are provided at https://github.com/psunlpgroup/ReaLMistake.
Evaluating LLMs at Detecting Errors in LLM Responses
[ "Ryo Kamoi", "Sarkar Snigdha Sarathi Das", "Renze Lou", "Jihyun Janice Ahn", "Yilun Zhao", "Xiaoxin Lu", "Nan Zhang", "Yusen Zhang", "Haoran Ranran Zhang", "Sujeeth Reddy Vummanthala", "Salika Dave", "Shaobo Qin", "Arman Cohan", "Wenpeng Yin", "Rui Zhang" ]
Conference
Poster
2404.03602
[ "https://github.com/psunlpgroup/realmistake" ]
https://huggingface.co/papers/2404.03602
1
1
0
15
[]
[ "ryokamoi/realmistake" ]
[]
1
106
null
https://openreview.net/forum?id=dkpeWQRmlc
@inproceedings{ he2024hdt, title={{HDT}: Hierarchical Document Transformer}, author={Haoyu He and Markus Flicke and Jan Buchmann and Iryna Gurevych and Andreas Geiger}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dkpeWQRmlc} }
In this paper, we propose the Hierarchical Document Transformer (HDT), a novel sparse Transformer architecture tailored for structured hierarchical documents. Such documents are extremely important in numerous domains, including science, law or medicine. However, most existing solutions are inefficient and fail to make use of the structure inherent to documents. HDT exploits document structure by introducing auxiliary anchor tokens and redesigning the attention mechanism into a sparse multi-level hierarchy. This approach facilitates information exchange between tokens at different levels while maintaining sparsity, thereby enhancing computational and memory efficiency while exploiting the document structure as an inductive bias. We address the technical challenge of implementing HDT's sample-dependent hierarchical attention pattern by developing a novel sparse attention kernel that considers the hierarchical structure of documents. As demonstrated by our experiments, utilizing structural information present in documents leads to faster convergence, higher sample efficiency and better performance on downstream tasks.
HDT: Hierarchical Document Transformer
[ "Haoyu He", "Markus Flicke", "Jan Buchmann", "Iryna Gurevych", "Andreas Geiger" ]
Conference
Poster
2407.08330
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
107
null
https://openreview.net/forum?id=dj9x6JuiD5
@inproceedings{ wang2024with, title={With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation}, author={Yan Wang and Dongyang Ma and Deng Cai}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dj9x6JuiD5} }
Long text generation, such as novel writing and discourse-level translation with extremely long contexts, presents significant challenges to current language models. Existing methods mainly focus on extending the model's context window through strategies like length extrapolation. However, these approaches demand substantial hardware resources during the training and/or inference phases. Our proposed method, Temp-Lora, introduces an alternative concept. Instead of relying on the KV cache to store all context information, we embed this information directly into a temporary Lora module. In the process of long text generation, this module is progressively trained with text generated previously. This approach not only efficiently preserves contextual knowledge but also prevents any permanent alteration to the model's parameters given that the module is discarded post-generation. Extensive experiments on the PG19 language modeling benchmark and the GuoFeng discourse-level translation benchmark validate the effectiveness of Temp-Lora. Our results show that: 1) Temp-Lora substantially enhances generation quality for long text, as indicated by a 13.2\% decrease in perplexity (PPL) on a subset of PG19, and a 29.3\% decrease in PPL along with a 113.2\% increase in BLEU score on a subset of GuoFeng, 2) Temp-Lora is compatible with and enhances most existing long text generation methods, and 3) Temp-Lora can greatly reduce computational costs by shortening the context window. For example, we can ensure a moderate improvement in generation quality (a decrease of 3.8\% in PPL) while enabling a 51.5\% memory usage reduction and a 60.0\% decrease in latency for inference.
With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation
[ "Yan Wang", "Dongyang Ma", "Deng Cai" ]
Conference
Poster
2401.11504
[ "https://github.com/temporarylora/temp-lora" ]
https://huggingface.co/papers/2401.11504
0
1
0
3
[]
[]
[]
1
108
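The Temp-Lora record above describes interleaving generation with training: text generated so far is used to update a temporary LoRA module, which is discarded after generation. A minimal sketch of that loop, where `generate_chunk`, `train_lora_on_text`, and `unload_lora` are hypothetical placeholders for a real generation call and LoRA fine-tuning/unloading steps, and the chunk size and update schedule are illustrative rather than the paper's settings:

```python
def temp_lora_generation(model_with_lora, prompt: str, num_chunks: int, chunk_tokens: int = 1024) -> str:
    """Progressively train a temporary LoRA on previously generated text during long-form generation."""
    generated = ""
    for _ in range(num_chunks):
        # 1) Generate the next chunk with the (so far adapted) model.
        chunk = generate_chunk(model_with_lora, prompt + generated, max_new_tokens=chunk_tokens)  # hypothetical helper
        generated += chunk
        # 2) Embed the new context into the temporary LoRA instead of keeping it all in the KV cache.
        train_lora_on_text(model_with_lora, chunk)  # hypothetical helper: a few gradient steps on the chunk
    # 3) Discard the LoRA afterwards so the base model parameters are untouched.
    model_with_lora = unload_lora(model_with_lora)  # hypothetical helper
    return generated
```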
null
https://openreview.net/forum?id=didvEO1can
@inproceedings{ lin2024catcode, title={CatCode: A Comprehensive Evaluation Framework for {LLM}s On the Mixture of Code and Text}, author={Zhenru Lin and Yiqun Yao and Yang Yuan}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=didvEO1can} }
Large language models (LLMs) are increasingly proficient in understanding and generating a mixture of code and text. Evaluation based on such *mixture* can lead to a more comprehensive understanding of the models' abilities in solving coding problems. However, current evaluation methods are either limited in task coverage or lack standardization. To address this issue, we propose to apply category theory as math abstraction for code-related evaluation. Specifically, morphisms within a code category can represent code debugging and transformation, functors between two categories represent code translation, and functors between a code category and a natural language category represent code generation and explanation. We present an automatic evaluation framework called **CatCode** (**Cat**egory *Code*) that can assess the coding abilities of various ChatGPT-like LLMs, including ChatGPT, Text-Davinci, and CodeGeeX, in a *comprehensive* and *standard* way, and further support *composite* task evaluation. The code can be found in https://github.com/scorpio-nova/CatCode.
CatCode: A Comprehensive Evaluation Framework for LLMs On the Mixture of Code and Text
[ "Zhenru Lin", "Yiqun Yao", "Yang Yuan" ]
Conference
Poster
2403.01784
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
109
null
https://openreview.net/forum?id=dcbNzhVVQj
@inproceedings{ yao2024learning, title={Learning From Correctness Without Prompting Makes {LLM} Efficient Reasoner}, author={Yuxuan YAO and Han Wu and Zhijiang Guo and Zhou Biyan and Jiahui Gao and Sichun Luo and Hanxu Hou and Xiaojin Fu and Linqi Song}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dcbNzhVVQj} }
Large language models (LLMs) have demonstrated outstanding performance across various tasks, yet they still exhibit limitations such as hallucination, unfaithful reasoning, and toxic content. One potential approach to mitigate these issues is learning from human or external feedback (e.g. tools). In this paper, we introduce an intrinsic self-correct reasoning framework for LLMs that eliminates the need for human feedback, external tools, and handcrafted prompts. The proposed framework, based on a multi-step reasoning paradigm \textbf{Le}arning from \textbf{Co}rrectness (\textsc{LeCo}), improves reasoning performance without needing to learn from errors. This paradigm prioritizes learning from correct reasoning steps and introduces a unique method to measure confidence for each reasoning step based on generation logits. Experimental results across various multi-step reasoning tasks demonstrate the effectiveness of the framework in improving reasoning performance with reduced token consumption.
Learning From Correctness Without Prompting Makes LLM Efficient Reasoner
[ "Yuxuan YAO", "Han Wu", "Zhijiang Guo", "Zhou Biyan", "Jiahui Gao", "Sichun Luo", "Hanxu Hou", "Xiaojin Fu", "Linqi Song" ]
Conference
Poster
2403.19094
[ "https://github.com/starrYYxuan/LeCo" ]
https://huggingface.co/papers/2403.19094
0
0
0
9
[]
[]
[]
1
110
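The LeCo record above describes scoring the confidence of each reasoning step from generation logits. A minimal sketch of one plausible reading, scoring each step by its mean token log-probability and flagging the least confident step (the paper's exact confidence measure may combine additional signals):

```python
from typing import List

def step_confidences(steps_token_logprobs: List[List[float]]) -> List[float]:
    """Each element holds the token log-probs of one reasoning step; return the mean log-prob per step."""
    return [sum(lp) / max(len(lp), 1) for lp in steps_token_logprobs]

def least_confident_step(steps_token_logprobs: List[List[float]]) -> int:
    """Index of the step the model was least confident about."""
    scores = step_confidences(steps_token_logprobs)
    return min(range(len(scores)), key=scores.__getitem__)

# Toy usage: step 1 has a noticeably lower average log-prob, so it is flagged.
steps = [[-0.1, -0.2, -0.05], [-1.3, -2.0, -0.9], [-0.3, -0.1]]
print(step_confidences(steps))      # [-0.116..., -1.4, -0.2]
print(least_confident_step(steps))  # -> 1
```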
null
https://openreview.net/forum?id=dWYRjT501w
@inproceedings{ bronzini2024unveiling, title={Unveiling {LLM}s: The Evolution of Latent Representations in a Dynamic Knowledge Graph}, author={Marco Bronzini and Carlo Nicolini and Bruno Lepri and Jacopo Staiano and Andrea Passerini}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dWYRjT501w} }
Large Language Models (LLMs) demonstrate an impressive capacity to recall a vast range of factual knowledge. However, understanding their underlying reasoning and internal mechanisms in exploiting this knowledge remains a key research area. This work unveils the factual information an LLM represents internally for sentence-level claim verification. We propose an end-to-end framework to decode factual knowledge embedded in token representations from a vector space to a set of ground predicates, showing its layer-wise evolution using a dynamic knowledge graph. Our framework employs activation patching, a vector-level technique that alters a token representation during inference, to extract encoded knowledge. Accordingly, we neither rely on training nor external models. Using factual and common-sense claims from two claim verification datasets, we showcase interpretability analyses at local and global levels. The local analysis highlights entity centrality in LLM reasoning, from claim-related information and multi-hop reasoning to representation errors causing erroneous evaluation. On the other hand, the global analysis reveals trends in the underlying evolution, such as word-based knowledge evolving into claim-related facts. By interpreting semantics from LLM latent representations and enabling graph-related analyses, this work enhances the understanding of the factual knowledge resolution process.
Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph
[ "Marco Bronzini", "Carlo Nicolini", "Bruno Lepri", "Jacopo Staiano", "Andrea Passerini" ]
Conference
Poster
2404.03623
[ "https://github.com/Ipazia-AI/latent-explorer" ]
-1
-1
-1
-1
[]
[]
[]
0
111
null
https://openreview.net/forum?id=dJfBejh478
@inproceedings{ yao2024scalable, title={Scalable Model Editing via Customized Expert Networks}, author={Zihan Yao and Yu He and Tianyu Qi and Ming Li}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dJfBejh478} }
Addressing the issues of hallucinations and outdated knowledge in large language models is critical for their reliable application. Model Editing presents a promising avenue for mitigating these challenges in a cost-effective manner. However, existing methods often suffer from unsatisfactory generalization and unintended effects on non-edited samples. To overcome these limitations, we introduce a novel approach: Scalable Model Editing via Customized Expert Networks (SCEN), which is a two-stage continuous training paradigm. Specifically, in the first stage, we train lightweight expert networks individually for each piece of knowledge that needs to be updated. Subsequently, we train a corresponding indexing neuron for each expert to control the activation state of that expert. We conducted a series of experiments on the ZsRE and Hallucination benchmarks by tuning the advanced open-source LLM, Llama2, achieving state-of-the-art results compared to current mainstream methods. Our code is available at https://github.com/TAL-auroraX/SCEN.
Scalable Model Editing via Customized Expert Networks
[ "Zihan Yao", "Yu He", "Tianyu Qi", "Ming Li" ]
Conference
Poster
2404.02699
[ "https://github.com/tal-aurorax/scen" ]
-1
-1
-1
-1
[]
[]
[]
0
112
null
https://openreview.net/forum?id=dJMTn3QOWO
@inproceedings{ mishra2024finegrained, title={Fine-grained Hallucination Detection and Editing for Language Models}, author={Abhika Mishra and Akari Asai and Vidhisha Balachandran and Yizhong Wang and Graham Neubig and Yulia Tsvetkov and Hannaneh Hajishirzi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=dJMTn3QOWO} }
Large language models (LMs) are prone to generate factual errors, which are often called hallucinations. In this paper, we introduce a comprehensive taxonomy of hallucinations and argue that hallucinations manifest in diverse forms, each requiring varying degrees of careful assessments to verify factuality. We propose a novel task of automatic fine-grained hallucination detection and construct a new evaluation benchmark, FavaBench, that includes about one thousand fine-grained human judgments on three LM outputs across various domains. Our analysis reveals that ChatGPT and Llama2-Chat (70B, 7B) exhibit diverse types of hallucinations in the majority of their outputs in information-seeking scenarios. We train FAVA, a retrieval-augmented LM by carefully creating synthetic data to detect and correct fine-grained hallucinations. On our benchmark, our automatic and human evaluations show that FAVA significantly outperforms ChatGPT and GPT-4 on fine-grained hallucination detection, and edits suggested by FAVA improve the factuality of LM-generated text.
Fine-grained Hallucination Detection and Editing for Language Models
[ "Abhika Mishra", "Akari Asai", "Vidhisha Balachandran", "Yizhong Wang", "Graham Neubig", "Yulia Tsvetkov", "Hannaneh Hajishirzi" ]
Conference
Poster
2401.06855
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
113
null
https://openreview.net/forum?id=cKBmZ2PZ6c
@inproceedings{ xiao2024orag, title={{ORAG}: Ontology-Guided Retrieval-Augmented Generation for Theme-Specific Entity Typing}, author={Jinfeng Xiao and Linyi Ding and James Barry and Mohab Elkaref and Geeth De Mel and Jiawei Han}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=cKBmZ2PZ6c} }
Large language models (LLMs) incorporated with retrieval-augmented generation (RAG) have shown great power in many NLP tasks, including fine-grained entity typing (FET). However, we observe that recent LLMs can easily suffer from hallucinations on highly specialized and fast-evolving themes (e.g., redox-active organic electrode materials), especially in the following cases: (1) unseen entities: an entity never appears in the pre-training corpora of LLMs; and (2) misleading semantics: the context of an entity can potentially mislead an entity typing algorithm if the relevant knowledge is not correctly retrieved and utilized. To address these challenges, this paper proposes an Ontology-Guided Retrieval-Augmented Generation (ORAG) approach that incorporates ontology structures with RAG for the theme-specific entity typing task. ORAG first enriches the label ontology with external knowledge and constructs a structured knowledge unit for each node. Then, it retrieves the relevant nodes by dense passage retrieval and expands the retrieved results based on the ontological structure. In this way, more supporting knowledge will be retrieved within the limited input of LLMs for entity typing. In the evaluation, we construct a dataset with two themes for theme-specific entity typing with a focus on unseen entities and misleading semantics. We observe notable cases of hallucination when vanilla RAG is applied to Llama-3, GPT-3.5, and GPT-4, while ORAG can effectively mitigate such hallucinations and improve the results.
ORAG: Ontology-Guided Retrieval-Augmented Generation for Theme-Specific Entity Typing
[ "Jinfeng Xiao", "Linyi Ding", "James Barry", "Mohab Elkaref", "Geeth De Mel", "Jiawei Han" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
114
null
https://openreview.net/forum?id=cG1EbmWiSs
@inproceedings{ huang2024unified, title={Unified View of Grokking, Double Descent and Emergent Abilities: A Comprehensive Study on Algorithm Task}, author={Yufei Huang and Shengding Hu and Xu Han and Zhiyuan Liu and Maosong Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=cG1EbmWiSs} }
Recent studies have uncovered intriguing phenomena in deep learning, such as *grokking*, *double descent*, and *emergent abilities* in large language models, which challenge human intuition and are crucial for a deeper understanding of neural models. In this paper, we present a comprehensive study on algorithm task to provide a unified view of these three phenomena, with a focus on the interplay between memorization and generalization. Through extensive experiments spanning a wide range of model sizes and training data quantities, we uncover four distinct training dynamics, each arising from unique combinations of model size and training data quantity, formulating a theoretical framework for further analysis. Utilizing this framework, we establish connections between *double descent* and *grokking* and propose two verifiable predictions regarding the occurrence of *double descent*, both substantiated by our experimental results. Moreover, we expand our experiments to the multi-task learning paradigm, demonstrating how algorithm tasks can be turned into emergent abilities by mixing some pure memorization data. This offers a novel perspective to understand *emergent abilities* in Large Language Models.
Unified View of Grokking, Double Descent and Emergent Abilities: A Comprehensive Study on Algorithm Task
[ "Yufei Huang", "Shengding Hu", "Xu Han", "Zhiyuan Liu", "Maosong Sun" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
115
null
https://openreview.net/forum?id=c30qeMg8dv
@inproceedings{ nahar2024fakes, title={Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding {LLM} Hallucinations}, author={Mahjabin Nahar and Haeseung Seo and Eun-Ju Lee and Aiping Xiong and Dongwon Lee}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=c30qeMg8dv} }
The widespread adoption and transformative effects of large language models (LLMs) have sparked concerns regarding their capacity to produce inaccurate and fictitious content, referred to as `hallucinations'. Given the potential risks associated with hallucinations, humans should be able to identify them. This research aims to understand the human perception of LLM hallucinations by systematically varying the degree of hallucination (genuine, minor hallucination, major hallucination) and examining its interaction with warning (i.e., a warning of potential inaccuracies: absent vs. present). Participants ($N=419$) from Prolific rated the perceived accuracy and engaged with content (e.g., like, dislike, share) in a Q/A format. Results indicate that humans rank content as truthful in the order genuine > minor hallucination > major hallucination and user engagement behaviors mirror this pattern. More importantly, we observed that warning improves hallucination detection without significantly affecting the perceived truthfulness of genuine content. We conclude by offering insights for future tools to aid human detection of hallucinations.
Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations
[ "Mahjabin Nahar", "Haeseung Seo", "Eun-Ju Lee", "Aiping Xiong", "Dongwon Lee" ]
Conference
Poster
2404.03745
[ "https://github.com/mahjabinnahar/fakes-of-varying-shades-survey-materials" ]
-1
-1
-1
-1
[]
[]
[]
0
116
null
https://openreview.net/forum?id=bwo3GVsgOv
@inproceedings{ wagner2024personalized, title={Personalized Collaborative Fine-Tuning for On-Device Large Language Models}, author={Nicolas Wagner and Dongyang Fan and Martin Jaggi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=bwo3GVsgOv} }
We explore on-device collaborative fine-tuning of large language models under limited local data availability. We introduce three distinct dynamic collaborator selection schemes, allowing trust-weighted personalized update aggregation: model-similarity-based, prediction-similarity-based and validation-performance-based. To minimize communication overhead, we integrate Low-Rank Adaptation (LoRA) and only exchange LoRA model updates. Our protocols, driven by prediction and performance metrics, surpass both FedAvg and local fine-tuning methods, which is particularly evident in realistic distributed scenarios with more diverse local data distributions. The results underscore the effectiveness of our approach in addressing heterogeneity and scarcity of the local datasets.
Personalized Collaborative Fine-Tuning for On-Device Large Language Models
[ "Nicolas Wagner", "Dongyang Fan", "Martin Jaggi" ]
Conference
Poster
2404.09753
[ "https://github.com/epfml/personalized-collaborative-llms" ]
-1
-1
-1
-1
[]
[]
[]
0
117
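The record above describes trust-weighted aggregation of LoRA updates across devices, with collaborator weights derived from model similarity, prediction similarity, or validation performance. A minimal sketch of the model-similarity-based variant, assuming each client's LoRA update is available as a flattened vector (the softmax temperature and normalisation are illustrative choices, not the paper's):

```python
import numpy as np

def aggregate_lora_updates(own_update: np.ndarray, peer_updates: list, temperature: float = 1.0) -> np.ndarray:
    """Weight collaborators by cosine similarity to the local LoRA update, then average the updates."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    updates = [own_update] + list(peer_updates)
    sims = np.array([cosine(own_update, u) for u in updates])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()          # trust weights sum to one
    return sum(w * u for w, u in zip(weights, updates))
```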
null
https://openreview.net/forum?id=bttKwCZDkm
@inproceedings{ saxon2024benchmarks, title={Benchmarks as Microscopes: A Call for Model Metrology}, author={Michael Saxon and Ari Holtzman and Peter West and William Yang Wang and Naomi Saphra}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=bttKwCZDkm} }
Modern language models (LMs) pose a new challenge in capability assessment. Static benchmarks inevitably saturate without providing confidence in the deployment tolerances of LM-based systems, but developers nonetheless claim that their models have generalized traits such as reasoning or open-domain language understanding based on these flawed metrics. The science and practice of LMs requires a new approach to benchmarking which measures specific capabilities with dynamic assessments. To be confident in our metrics, we need a new discipline of *model metrology*---one which focuses on how to generate benchmarks that predict performance under deployment. Motivated by our evaluation criteria, we outline how building a community of model metrology practitioners---one focused on building tools and studying how to measure system capabilities---is the best way to meet these needs and add clarity to the AI discussion.
Benchmarks as Microscopes: A Call for Model Metrology
[ "Michael Saxon", "Ari Holtzman", "Peter West", "William Yang Wang", "Naomi Saphra" ]
Conference
Poster
2407.16711
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
118
null
https://openreview.net/forum?id=bo4pauxnIR
@inproceedings{ nam2024tabular, title={Tabular Transfer Learning via Prompting {LLM}s}, author={Jaehyun Nam and Woomin Song and Seong Hyeon Park and Jihoon Tack and Sukmin Yun and Jaehyung Kim and Kyu Hwan Oh and Jinwoo Shin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=bo4pauxnIR} }
Learning with a limited number of labeled data is a central problem in real-world applications of machine learning, as it is often expensive to obtain annotations. To deal with the scarcity of labeled data, transfer learning is a conventional approach; it suggests learning transferable knowledge by training a neural network from multiple other sources. In this paper, we investigate transfer learning of tabular tasks, which has been less studied and successful in the literature, compared to other domains, e.g., vision and language. This is because tables are inherently heterogeneous, i.e., they contain different columns and feature spaces, making transfer learning difficult. On the other hand, recent advances in natural language processing suggest that the label scarcity issue can be mitigated by utilizing in-context learning capability of large language models (LLMs). Inspired by this and the fact that LLMs can also process tables within a unified language space, we ask whether LLMs can be effective for tabular transfer learning, in particular, under the scenarios where the source and target datasets are of different format. As a positive answer, we propose a novel tabular transfer learning framework, coined Prompt to Transfer (P2T), that utilizes unlabeled (or heterogeneous) source data with LLMs. Specifically, P2T identifies a column feature in a source dataset that is strongly correlated with a target task feature to create examples relevant to the target task, thus creating pseudo-demonstrations for prompts. Experimental results demonstrate that P2T outperforms previous methods on various tabular learning benchmarks, showing good promise for the important, yet underexplored tabular transfer learning problem. Code is available at https://github.com/jaehyun513/P2T.
Tabular Transfer Learning via Prompting LLMs
[ "Jaehyun Nam", "Woomin Song", "Seong Hyeon Park", "Jihoon Tack", "Sukmin Yun", "Jaehyung Kim", "Kyu Hwan Oh", "Jinwoo Shin" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
119
null
https://openreview.net/forum?id=bnscREWUuc
@inproceedings{ richburg2024how, title={How Multilingual are Large Language Models Fine-tuned for Translation?}, author={Aquia Richburg and Marine Carpuat}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=bnscREWUuc} }
A new paradigm for machine translation has recently emerged: fine-tuning large language models on parallel text has been shown to outperform dedicated translation systems trained in a supervised fashion on much larger amounts of parallel data (Xu et al. 2024, Alves et al. 2024). However, it remains unclear whether this paradigm can enable massively multilingual machine translation or whether it requires fine-tuning dedicated models for a small number of language pairs. How does translation fine-tuning impact the MT capabilities of LLMs for zero-shot languages, zero-shot language pairs, and translation tasks that do not involve English? To address these questions, we conduct an extensive empirical evaluation of the translation quality of the TOWER family of language models (Alves et al. 2024) on 132 translation tasks from the multi-parallel FLORES data. We find that translation fine-tuning improves translation quality even for zero-shot languages on average, but that the impact is uneven depending on the language pairs involved. These results call for further research to effectively enable massively multilingual translation with LLMs.
How Multilingual are Large Language Models Fine-tuned for Translation?
[ "Aquia Richburg", "Marine Carpuat" ]
Conference
Poster
2405.20512
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
120
null
https://openreview.net/forum?id=bkY8zEDdH9
@inproceedings{ xiao2024od, title={O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language Models}, author={Yuchen Xiao and Yanchao Sun and Mengda Xu and Udari Madhushani Sehwag and Jared Vann and Deepeka Garg and Sumitra Ganesh}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=bkY8zEDdH9} }
Recent advancements in large language models (LLMs) have exhibited promising performance in solving sequential decision-making problems. By imitating few-shot examples provided in the prompts (i.e., in-context learning), an LLM agent can interact with an external environment and complete given tasks without additional training. However, such few-shot examples are often insufficient to generate high-quality solutions for complex and long-horizon tasks, while the limited context length cannot consume larger-scale demonstrations with long interaction horizons. To this end, we propose an offline learning framework that utilizes offline data at scale (e.g., logs of human interactions) to improve LLM-powered policies without fine-tuning. The proposed method O3D (Offline Data-driven Discovery and Distillation) automatically discovers reusable skills and distills generalizable knowledge across multiple tasks based on offline interaction data, advancing the capability of solving downstream tasks. Empirical results under two interactive decision-making benchmarks (ALFWorld and WebShop) verify that O3D can notably enhance the decision-making capabilities of LLMs through the offline discovery and distillation process, and consistently outperform baselines across various LLMs.
O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language Models
[ "Yuchen Xiao", "Yanchao Sun", "Mengda Xu", "Udari Madhushani Sehwag", "Jared Vann", "Deepeka Garg", "Sumitra Ganesh" ]
Conference
Poster
2310.14403
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
121
null
https://openreview.net/forum?id=b0y6fbSUG0
@inproceedings{ hao2024llm, title={{LLM} Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models}, author={Shibo Hao and Yi Gu and Haotian Luo and Tianyang Liu and Xiyan Shao and Xinyuan Wang and Shuhua Xie and Haodi Ma and Adithya Samavedhi and Qiyue Gao and Zhen Wang and Zhiting Hu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=b0y6fbSUG0} }
Reasoning is a pivotal skill in the evolution of Large Language Models (LLMs), and constructing step-by-step reasoning chains is essential for enhancing their reasoning abilities. Despite a rich array of recent research aimed at deriving improved reasoning chains from LLMs, two major challenges hinder the progress in this field: the lack of effective methods to evaluate reasoning chains, and the absence of systematic analysis of reasoning algorithms. In this work, we introduce RICE, a novel LLM-based approach for automated evaluation of reasoning chains, which autonomously constructs a detailed evaluation criteria list to help itself recognize intermediate reasoning mistakes. This fully automatic method proves to be more precise than existing metrics and offers a complementary angle to conventional answer-based evaluations. For the second challenge, we present a formulation that connects extensive existing reasoning algorithms. LLM Reasoners, a modular library for step-by-step reasoning algorithms, is developed based on the formulation. It enables users to specify problem domains and reasoning strategies with minimal effort. With the help of the new metric and library, we make a comprehensive study of the factors contributing to a reasoning algorithm, including the reward, the exploration strategy, the world model, and the prompt format, with interesting findings unveiled through RICE.
LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models
[ "Shibo Hao", "Yi Gu", "Haotian Luo", "Tianyang Liu", "Xiyan Shao", "Xinyuan Wang", "Shuhua Xie", "Haodi Ma", "Adithya Samavedhi", "Qiyue Gao", "Zhen Wang", "Zhiting Hu" ]
Conference
Poster
2404.05221
[ "https://github.com/maitrix-org/llm-reasoners" ]
https://huggingface.co/papers/2404.05221
1
1
0
12
[]
[]
[]
1
122
null
https://openreview.net/forum?id=av0D19pSkU
@inproceedings{ duan2024do, title={Do Membership Inference Attacks Work on Large Language Models?}, author={Michael Duan and Anshuman Suri and Niloofar Mireshghallah and Sewon Min and Weijia Shi and Luke Zettlemoyer and Yulia Tsvetkov and Yejin Choi and David Evans and Hannaneh Hajishirzi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=av0D19pSkU} }
Membership inference attacks (MIAs) attempt to predict whether a particular datapoint is a member of a target model's training data. Despite extensive research on traditional machine learning models, there has been limited work studying MIA on the pre-training data of large language models (LLMs). We perform a large-scale evaluation of MIAs over a suite of language models (LMs) trained on the Pile, ranging from 160M to 12B parameters. We find that MIAs barely outperform random guessing for most settings across varying LLM sizes and domains. Further analyses reveal that this poor performance can be attributed to (1) the combination of a large dataset and few training iterations, and (2) an inherently fuzzy boundary between members and non-members. We also find that, when LLMs have been shown to be vulnerable to MIAs, this apparent success can be attributed to a distribution shift, e.g., members and non-members are seemingly drawn from identical domain but with different temporal ranges. Finally, we observe that existing MIAs are highly sensitive to even small changes in a sample. Such changes may cause samples that are lexically or semantically similar to members to be classified as non-members, which may be at odds with leakage that privacy auditors care about. We release our code and data as a unified benchmark package that includes all existing MIAs, supporting future work.
Do Membership Inference Attacks Work on Large Language Models?
[ "Michael Duan", "Anshuman Suri", "Niloofar Mireshghallah", "Sewon Min", "Weijia Shi", "Luke Zettlemoyer", "Yulia Tsvetkov", "Yejin Choi", "David Evans", "Hannaneh Hajishirzi" ]
Conference
Poster
2402.07841
[ "https://github.com/iamgroot42/mimir" ]
-1
-1
-1
-1
[]
[]
[]
0
123
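The membership-inference record above evaluates attacks that predict whether an example was in a model's training data. A minimal sketch of the simplest loss-based baseline (one of several attacks such a benchmark would include), thresholding the per-example language-modeling loss under the target model, assuming a Hugging Face-style causal LM and tokenizer:

```python
import torch

@torch.no_grad()
def sequence_loss(model, tokenizer, text: str) -> float:
    """Average next-token cross-entropy of `text` under a causal LM (labels are shifted internally)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model(ids, labels=ids)
    return float(out.loss)

def loss_attack(model, tokenizer, text: str, threshold: float) -> bool:
    """Predict membership when the loss falls below a threshold (lower loss = more 'memorized')."""
    return sequence_loss(model, tokenizer, text) < threshold
```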
null
https://openreview.net/forum?id=amhPBLFYWv
@inproceedings{ michaelov2024revenge, title={Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics}, author={James Michaelov and Catherine Arnett and Ben Bergen}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=amhPBLFYWv} }
Transformers have generally supplanted recurrent neural networks as the dominant architecture for both natural language processing tasks and for modelling the effect of predictability on online human language comprehension. However, two recently developed recurrent model architectures, RWKV and Mamba, appear to perform natural language tasks comparably to or better than transformers of equivalent scale. In this paper, we show that contemporary recurrent models are now also able to match—and in some cases, exceed—performance of comparably sized transformers at modeling online human language comprehension. This suggests that transformer language models are not uniquely suited to this task, and opens up new directions for debates about the extent to which architectural features of language models make them better or worse models of human language comprehension.
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
[ "James Michaelov", "Catherine Arnett", "Ben Bergen" ]
Conference
Poster
2404.19178
[ "https://github.com/jmichaelov/recurrent-vs-transformer-modeling" ]
-1
-1
-1
-1
[]
[]
[]
0
124
null
https://openreview.net/forum?id=aajyHYjjsk
@inproceedings{ marks2024the, title={The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets}, author={Samuel Marks and Max Tegmark}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=aajyHYjjsk} }
Large Language Models (LLMs) have impressive capabilities, but are prone to outputting falsehoods. Recent work has developed techniques for inferring whether an LLM is telling the truth by training probes on the LLM's internal activations. However, this line of work is controversial, with some authors pointing out failures of these probes to generalize in basic ways, among other conceptual issues. In this work, we use high-quality datasets of simple true/false statements to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in an LLM's forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that at sufficient scale, LLMs *linearly represent* the truth or falsehood of factual statements. We also show that simple difference-in-mean probes generalize as well as other probing techniques while identifying directions which are more causally implicated in model outputs.
The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
[ "Samuel Marks", "Max Tegmark" ]
Conference
Poster
2310.06824
[ "https://github.com/saprmarks/geometry-of-truth" ]
-1
-1
-1
-1
[]
[]
[]
0
125
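The Geometry of Truth record above highlights difference-in-mean probes over hidden activations of true and false statements. A minimal sketch of fitting and applying such a probe, assuming per-statement activations from some layer have already been extracted:

```python
import numpy as np

def fit_diff_in_mean_probe(acts_true: np.ndarray, acts_false: np.ndarray):
    """Truth direction = mean(true activations) - mean(false activations)."""
    mu_t, mu_f = acts_true.mean(axis=0), acts_false.mean(axis=0)
    direction = mu_t - mu_f
    midpoint = (mu_t + mu_f) / 2.0
    return direction, midpoint

def predict_true(acts: np.ndarray, direction: np.ndarray, midpoint: np.ndarray) -> np.ndarray:
    """A statement is scored as true when its activation projects positively past the class midpoint."""
    return (acts - midpoint) @ direction > 0
```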
null
https://openreview.net/forum?id=aKwQPRjdGa
@inproceedings{ wu2024hummer, title={Hummer: Towards Limited Competitive Preference Dataset}, author={Yusen Wu and Li Jiang and Junwu Xiong and Jingqing Ruan and Yichuan Ding and Qingpei Guo and zujie wen and JUN ZHOU and Xiaotie Deng}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=aKwQPRjdGa} }
Preference datasets are essential for incorporating human preferences into pre-trained language models, playing a key role in the success of Reinforcement Learning from Human Feedback. However, these datasets often demonstrate conflicting alignment objectives, leading to increased vulnerability to jailbreak attacks and challenges in adapting downstream tasks to prioritize specific alignment objectives without negatively impacting others. In this work, we introduce a novel statistical metric, Alignment Dimension Conflict, to quantify the degree of conflict within preference datasets. We then present \texttt{Hummer} and its fine-grained variant, \texttt{Hummer-F}, as innovative pairwise preference datasets with reduced-conflict alignment objectives. \texttt{Hummer} is built based on UltraFeedback and is enhanced by AI feedback from GPT-4, marking as the first preference dataset aimed at reducing the competition between alignment objectives. Furthermore, we develop reward models, \texttt{HummerRM} and \texttt{HummerRM-F}, which employ a hybrid sampling approach to balance diverse alignment objectives effectively. This sampling method positions \texttt{HummerRM} as an ideal model for domain-specific further fine-tuning and reducing vulnerability to jailbreak attacks.
Hummer: Towards Limited Competitive Preference Dataset
[ "Yusen Wu", "Li Jiang", "Junwu Xiong", "Jingqing Ruan", "Yichuan Ding", "Qingpei Guo", "zujie wen", "JUN ZHOU", "Xiaotie Deng" ]
Conference
Poster
2405.11647
[ "" ]
https://huggingface.co/papers/2405.11647
0
0
0
9
[]
[ "sarinw-2024/Hummer" ]
[]
1
126
null
https://openreview.net/forum?id=aKkAwZB6JV
@inproceedings{ tunstall2024zephyr, title={Zephyr: Direct Distillation of {LM} Alignment}, author={Lewis Tunstall and Edward Emanuel Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro Von Werra and Cl{\'e}mentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M Rush and Thomas Wolf}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=aKkAwZB6JV} }
We aim to produce a smaller language model that is aligned to user intent. Previous research has shown that applying distilled supervised fine-tuning (dSFT) on larger models significantly improves task accuracy; however, these models are unaligned, i.e. they do not respond well to natural prompts. To distill this property, we experiment with the use of preference data from AI Feedback (AIF). Starting from a dataset of outputs ranked by a teacher model, we apply distilled direct preference optimization (dDPO) to learn a chat model with significantly improved intent alignment. The approach requires only a few hours of training without any additional sampling during fine-tuning. The final result, Zephyr-7B, set a new state-of-the-art on chat benchmarks for 7B parameter models, and requires no human annotation. In particular, results on MT-Bench show that Zephyr-7B surpassed Llama2-Chat-70B, at the time the best open-access RLHF-based model.
Zephyr: Direct Distillation of LM Alignment
[ "Lewis Tunstall", "Edward Emanuel Beeching", "Nathan Lambert", "Nazneen Rajani", "Kashif Rasul", "Younes Belkada", "Shengyi Huang", "Leandro Von Werra", "Clémentine Fourrier", "Nathan Habib", "Nathan Sarrazin", "Omar Sanseviero", "Alexander M Rush", "Thomas Wolf" ]
Conference
Poster
2310.16944
[ "https://github.com/huggingface/alignment-handbook" ]
-1
-1
-1
-1
[]
[]
[]
0
127
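The Zephyr record above describes distilled direct preference optimization (dDPO) on AI-ranked outputs. A minimal sketch of the standard DPO objective that dDPO applies, given summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model (batching and the value of beta are illustrative):

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_chosen | x)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_rejected | x)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_chosen | x)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_rejected | x)
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO loss: prefer the chosen response relative to the frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```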
null
https://openreview.net/forum?id=Zu8OWNUC0u
@inproceedings{ fehr2024nonparametric, title={Nonparametric Variational Regularisation of Pretrained Transformers}, author={Fabio James Fehr and James Henderson}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Zu8OWNUC0u} }
Pretrained transformers have demonstrated impressive abilities, but tend not to generalise well out-of-domain and are very expensive to fine-tune on new domain data. Nonparametric Variational Information Bottleneck (NVIB) has been proposed as a regulariser for training cross-attention in transformers, potentially addressing this domain overfitting problem. We extend the NVIB framework to replace all types of attention functions in transformers. We show that existing pretrained transformers can be reinterpreted as nonparametric variational models using an empirical prior distribution and identity initialisation with controllable hyperparameters. We then show that changing the initialisation introduces a novel, information-theoretic post-training regularisation in the attention mechanism, which improves out-of-domain generalisation on NLP tasks without any additional training. This success supports the hypothesis that the way pretrained transformer embeddings represent information is accurately characterised by nonparametric variational Bayesian models.
Nonparametric Variational Regularisation of Pretrained Transformers
[ "Fabio James Fehr", "James Henderson" ]
Conference
Poster
2312.00662
[ "" ]
https://huggingface.co/papers/2312.00662
1
1
0
2
[]
[]
[]
1
128
null
https://openreview.net/forum?id=Zt1dwG8xrK
@inproceedings{ hron2024training, title={Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability}, author={Jiri Hron and Laura A Culp and Gamaleldin Fathy Elsayed and Rosanne Liu and Jasper Snoek and Simon Kornblith and Alex Rizkowsky and Isabelle Simpson and Jascha Sohl-Dickstein and Noah Fiedel and Aaron T Parisi and Alexander A Alemi and Azade Nova and Ben Adlam and Bernd Bohnet and Gaurav Mishra and Hanie Sedghi and Izzeddin Gur and Jaehoon Lee and John D Co-Reyes and Kathleen Kenealy and Kelvin Xu and Kevin Swersky and Igor Mordatch and Lechao Xiao and Maxwell Bileschi and Peter J Liu and Roman Novak and Sharad Vikram and Tris Warkentin and Jeffrey Pennington}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Zt1dwG8xrK} }
While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood. Hallucinations come in many forms, and there is no universally accepted definition. We thus focus on studying only those hallucinations where a correct answer appears verbatim in the training set. To fully control the training data content, we construct a knowledge graph (KG)-based dataset, and use it to train a set of increasingly large LMs. We find that for a fixed dataset, larger and longer-trained LMs hallucinate less. However, hallucinating on ≤5% of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than Hoffmann et al. (2022) reported was optimal. Given this costliness, we study how hallucination detectors depend on scale. While we see detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations.
Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability
[ "Jiri Hron", "Laura A Culp", "Gamaleldin Fathy Elsayed", "Rosanne Liu", "Jasper Snoek", "Simon Kornblith", "Alex Rizkowsky", "Isabelle Simpson", "Jascha Sohl-Dickstein", "Noah Fiedel", "Aaron T Parisi", "Alexander A Alemi", "Azade Nova", "Ben Adlam", "Bernd Bohnet", "Gaurav Mishra", "Hanie Sedghi", "Izzeddin Gur", "Jaehoon Lee", "John D Co-Reyes", "Kathleen Kenealy", "Kelvin Xu", "Kevin Swersky", "Igor Mordatch", "Lechao Xiao", "Maxwell Bileschi", "Peter J Liu", "Roman Novak", "Sharad Vikram", "Tris Warkentin", "Jeffrey Pennington" ]
Conference
Poster
2408.07852
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
129
null
https://openreview.net/forum?id=Zq9Dfj4nBo
@inproceedings{ weiss2024redesigning, title={Redesigning Information Markets in the Era of Language Models}, author={Martin Weiss and Nasim Rahaman and Manuel Wuthrich and Yoshua Bengio and Li Erran Li and Bernhard Sch{\"o}lkopf and Christopher Pal}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Zq9Dfj4nBo} }
Information markets face many challenges leading to instability, inefficiency, and failure, ultimately reducing incentives for the creation and distribution of high-quality information. A long-standing issue for information markets is the Buyer's Inspection Paradox: buyers need to inspect information to assess its value, while sellers must limit inspection to prevent unauthorized use or theft. This paradox results from the information asymmetry present in the market, where sellers know more about the quality of their goods than buyers. This work proposes an information market design that leverages language models to mitigate the Buyer's Inspection Paradox by enabling inspection, comparison, and purchase of information, while algorithmically preventing expropriation. Our experiments (a) show methods that improve the economic rationality of language models, (b) investigate how language model behaviour changes with the price of goods, and (c) evaluate the simulated cost-efficiency of the proposed market under various conditions.
Redesigning Information Markets in the Era of Language Models
[ "Martin Weiss", "Nasim Rahaman", "Manuel Wuthrich", "Yoshua Bengio", "Li Erran Li", "Bernhard Schölkopf", "Christopher Pal" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
130
null
https://openreview.net/forum?id=Zb0ajZ7vAt
@inproceedings{ shnitzer2024large, title={Large Language Model Routing with Benchmark Datasets}, author={Tal Shnitzer and Anthony Ou and M{\'\i}rian Silva and Kate Soule and Yuekai Sun and Justin Solomon and Neil Thompson and Mikhail Yurochkin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Zb0ajZ7vAt} }
The number of open-source Large Language Models (LLMs) grows daily, as does the number of available benchmark datasets used to evaluate LLMs. While some models dominate these benchmarks, no single model achieves the best accuracy in all tasks and use cases. In light of this observation, we address the challenge of selecting the best LLM from a collection of pre-trained models, given a new task. While related work relies on evaluating each candidate model on a set of labeled examples, our new formulation does not assume any labeled data from the new task is available. Instead, we repurpose a collection of benchmark datasets---which may focus on different tasks than the one at hand---to learn a ''router'' model for LLM selection from inputs only; this problem reduces to a collection of binary classification tasks. Empirically, our strategy consistently improves performance over using any single model for all tasks.
Large Language Model Routing with Benchmark Datasets
[ "Tal Shnitzer", "Anthony Ou", "Mírian Silva", "Kate Soule", "Yuekai Sun", "Justin Solomon", "Neil Thompson", "Mikhail Yurochkin" ]
Conference
Poster
2309.15789
[ "" ]
https://huggingface.co/papers/2309.15789
2
1
0
8
[]
[]
[]
1
131
null
https://openreview.net/forum?id=ZZzXpyv65G
@inproceedings{ ye2024language, title={Language Models as Critical Thinking Tools: A Case Study of Philosophers}, author={Andre Ye and Jared Moore and Rose Novick and Amy X Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ZZzXpyv65G} }
Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify ideas, and engineer new concepts? We treat philosophy as a case study in critical thinking, and interview 21 professional philosophers about how they engage in critical thinking and on their experiences with LMs. We find that philosophers do not find LMs to be useful because they lack a sense of selfhood (memory, beliefs, consistency) and initiative (curiosity, proactivity). We propose the selfhood-initiative model for critical thinking tools to characterize this gap. Using the model, we formulate three roles LMs could play as critical thinking tools: the Interlocutor, the Monitor, and the Respondent. We hope that our work inspires LM researchers to further develop LMs as critical thinking tools and philosophers and other `critical thinkers' to imagine intellectually substantive uses of LMs.
Language Models as Critical Thinking Tools: A Case Study of Philosophers
[ "Andre Ye", "Jared Moore", "Rose Novick", "Amy X Zhang" ]
Conference
Poster
2404.04516
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
132
null
https://openreview.net/forum?id=ZDdLamBX4P
@inproceedings{ narayan2024cookbook, title={Cookbook: A framework for improving {LLM} generative abilities via programmatic data generating templates}, author={Avanika Narayan and Mayee F Chen and Kush Bhatia and Christopher Re}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=ZDdLamBX4P} }
Fine-tuning large language models (LLMs) on instruction datasets is a common way to improve their generative capabilities. However, instruction datasets can be expensive and time-consuming to manually curate, and while LLM-generated data is less labor-intensive, it may violate user privacy agreements or terms of service of LLM providers. Therefore, we seek a way of constructing instruction datasets with samples that are not generated by humans or LLMs but still improve LLM generative capabilities. In this work, we introduce Cookbook, a framework that programmatically generates training data consisting of simple patterns over random tokens, resulting in a scalable, cost-effective approach that avoids legal and privacy issues. First, Cookbook uses a template---a data generating Python function---to produce training data that encourages the model to learn an explicit pattern-based rule that corresponds to a desired task. We find that fine-tuning on Cookbook-generated data is able to improve performance on its corresponding task by up to 52.7 accuracy points. Second, since instruction datasets improve performance on multiple downstream tasks simultaneously, Cookbook algorithmically learns how to mix data from various templates to optimize performance on multiple tasks. On the standard multi-task GPT4ALL evaluation suite, Mistral-7B fine-tuned using a Cookbook-generated dataset attains the best accuracy on average compared to other 7B parameter instruction-tuned models and is the best performing model on 3 out of 8 tasks. Finally, we analyze when and why Cookbook improves performance and present a metric that allows us to verify that the improvement is largely explained by the model’s generations adhering better to template rules.
Cookbook: A framework for improving LLM generative abilities via programmatic data generating templates
[ "Avanika Narayan", "Mayee F Chen", "Kush Bhatia", "Christopher Re" ]
Conference
Poster
2410.05224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
133
null
https://openreview.net/forum?id=YwrNePfb3E
@inproceedings{ feffer2024prompt, title={Prompt Exploration with Prompt Regression}, author={Michael Feffer and Ronald Xu and Yuekai Sun and Mikhail Yurochkin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=YwrNePfb3E} }
With the advent of democratized usage of large language models (LLMs), there is a growing desire to systematize LLM prompt creation and selection processes beyond iterative trial-and-error. Prior works mainly focus on searching the space of prompts without accounting for relations between prompt variations. Here we propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements, as well as a simple method to select an effective prompt for a given use case. We evaluate our approach with open-source LLMs of different sizes on several different tasks.
Prompt Exploration with Prompt Regression
[ "Michael Feffer", "Ronald Xu", "Yuekai Sun", "Mikhail Yurochkin" ]
Conference
Poster
2405.11083
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
134
null
https://openreview.net/forum?id=YfHxQSoaWU
@inproceedings{ kim2024fables, title={{FABLES}: Evaluating faithfulness and content selection in book-length summarization}, author={Yekyung Kim and Yapei Chang and Marzena Karpinska and Aparna Garimella and Varun Manjunatha and Kyle Lo and Tanya Goyal and Mohit Iyyer}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=YfHxQSoaWU} }
While long-context large language models (LLMs) can technically summarize book-length documents (> 100K tokens), the length and complexity of the documents have so far prohibited evaluations of input-dependent aspects like faithfulness. In this paper, we conduct the first large-scale human evaluation of faithfulness and content selection on LLM-generated summaries of fictional books. Our study mitigates the issue of data contamination by focusing on summaries of books published in 2023 or 2024, and we hire annotators who have fully read each book prior to the annotation task to minimize cost and cognitive burden. We collect FABLES, a dataset of annotations on 3,158 claims made in LLM-generated summaries of 26 books, at a cost of $5.2K USD, which allows us to rank LLM summarizers based on faithfulness: CLAUDE-3-OPUS significantly outperforms all closed-source LLMs, while the open-source MIXTRAL is on par with GPT-3.5-TURBO. An analysis of the annotations reveals that most unfaithful claims relate to events and character states, and they generally require indirect reasoning over the narrative to invalidate. While LLM-based auto-raters have proven reliable for factuality and coherence in other settings, we implement several LLM raters of faithfulness and find that none correlates strongly with human annotations, especially with regard to detecting unfaithful claims. Our experiments suggest that detecting unfaithful claims is an important future direction not only for summarization evaluation but also as a testbed for long-context understanding. Finally, we move beyond faithfulness by exploring content selection errors in book-length summarization: we develop a typology of omission errors related to crucial narrative elements and also identify a systematic over-emphasis on events occurring towards the end of the book. We release FABLES to spur further research on the evaluation of book-length summarization.
FABLES: Evaluating faithfulness and content selection in book-length summarization
[ "Yekyung Kim", "Yapei Chang", "Marzena Karpinska", "Aparna Garimella", "Varun Manjunatha", "Kyle Lo", "Tanya Goyal", "Mohit Iyyer" ]
Conference
Poster
2404.01261
[ "https://github.com/mungg/fables" ]
-1
-1
-1
-1
[]
[]
[]
0
135
null
https://openreview.net/forum?id=YX7QnhxESU
@inproceedings{ liang2024mapping, title={Mapping the Increasing Use of {LLM}s in Scientific Papers}, author={Weixin Liang and Yaohui Zhang and Zhengxuan Wu and Haley Lepp and Wenlong Ji and Xuandong Zhao and Hancheng Cao and Sheng Liu and Siyu He and Zhi Huang and Diyi Yang and Christopher Potts and Christopher D Manning and James Y. Zou}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=YX7QnhxESU} }
Scientific publishing lays the foundation of science by disseminating research findings, fostering collaboration, encouraging reproducibility, and ensuring that scientific knowledge is accessible, verifiable, and built upon over time. Recently, there has been immense speculation about how many people are using large language models (LLMs) like ChatGPT in their academic writing, and to what extent this tool might have an effect on global scientific practices. However, we lack a precise measure of the proportion of academic writing substantially modified or produced by LLMs. To address this gap, we conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the $\textit{arXiv}$, $\textit{bioRxiv}$, and $\textit{Nature}$ portfolio journals, using a population-level statistical framework to measure the prevalence of LLM-modified content over time. The statistical framework operates on the population level without the need to perform inference on any individual instance. Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers (up to 17.5%). In comparison, Mathematics papers and the Nature portfolio showed the least LLM modification (up to 6.3%). Moreover, at an aggregate level, our analysis reveals that higher levels of LLM-modification are associated with papers whose first authors post preprints more frequently, papers in more crowded areas, and papers with shorter lengths. Our findings suggest that LLMs are being broadly used in scientific papers.
Mapping the Increasing Use of LLMs in Scientific Papers
[ "Weixin Liang", "Yaohui Zhang", "Zhengxuan Wu", "Haley Lepp", "Wenlong Ji", "Xuandong Zhao", "Hancheng Cao", "Sheng Liu", "Siyu He", "Zhi Huang", "Diyi Yang", "Christopher Potts", "Christopher D Manning", "James Y. Zou" ]
Conference
Poster
2404.01268
[ "https://github.com/Weixin-Liang/Mapping-the-Increasing-Use-of-LLMs-in-Scientific-Papers" ]
-1
-1
-1
-1
[]
[]
[]
0
136
null
https://openreview.net/forum?id=YDZ7GeFLxq
@inproceedings{ tan2024scattered, title={Scattered Mixture-of-Experts Implementation}, author={Shawn Tan and Yikang Shen and Rameswar Panda and Aaron Courville}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=YDZ7GeFLxq} }
ScatterMoE is an implementation of Sparse Mixture-of-Experts (SMoE) on GPUs. ScatterMoE builds upon techniques in existing implementations, overcoming some of their current limitations to improve batched inference, training speed, and memory footprint. It achieves this by avoiding padding and excessive copies of the input. We also fuse expert linear transforms and reordering operations with ParallelLinear, a module that can be used to extend the concept of SMoEs. We benchmark our implementation against Megablocks, and show that it enables higher throughput and a lower memory footprint. We also show how ParallelLinear enables extension of the Mixture-of-Experts concept by demonstrating it with an implementation of Mixture-of-Attention.
Scattered Mixture-of-Experts Implementation
[ "Shawn Tan", "Yikang Shen", "Rameswar Panda", "Aaron Courville" ]
Conference
Poster
2403.08245
[ "https://github.com/shawntan/scattermoe" ]
-1
-1
-1
-1
[]
[]
[]
0
137
null
https://openreview.net/forum?id=Xh1B90iBSR
@inproceedings{ wang2024what, title={What Are Tools Anyway? A Survey from the Language Model Perspective}, author={Zhiruo Wang and Zhoujun Cheng and Hao Zhu and Daniel Fried and Graham Neubig}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Xh1B90iBSR} }
Language models (LMs) are powerful yet mostly limited to text generation tasks. Tools have substantially enhanced their performance for tasks that require complex skills. However, many works adopt the term “tool” in different ways, raising the question: What is a tool anyway? Subsequently, where and how do tools help LMs? In this survey, we provide a unified definition of tools as external programs used by LMs, and perform a systematic review of LM tooling scenarios and approaches. Grounded on this review, we empirically study the efficiency of various tooling methods by measuring their required compute and performance gains on various benchmarks, and highlight some challenges and potential future research in the field.
What Are Tools Anyway? A Survey from the Language Model Perspective
[ "Zhiruo Wang", "Zhoujun Cheng", "Hao Zhu", "Daniel Fried", "Graham Neubig" ]
Conference
Poster
2403.15452
[ "" ]
https://huggingface.co/papers/2403.15452
0
0
0
5
[]
[]
[]
1
138
null
https://openreview.net/forum?id=XII0Wp1XA9
@inproceedings{ liu2024a, title={A Dynamic {LLM}-Powered Agent Network for Task-Oriented Agent Collaboration}, author={Zijun Liu and Yanzhe Zhang and Peng Li and Yang Liu and Diyi Yang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=XII0Wp1XA9} }
Recent studies show that collaborating multiple large language model (LLM) powered agents is a promising way for task solving. However, current approaches are constrained by using a fixed number of agents and static communication structures. In this work, we propose automatically selecting a team of agents from candidates to collaborate in a dynamic communication structure toward different tasks and domains. Specifically, we build a framework named Dynamic LLM-Powered Agent Network ($\textbf{DyLAN}$) for LLM-powered agent collaboration, operating a two-stage paradigm: (1) Team Optimization and (2) Task Solving. During the first stage, we utilize an agent selection algorithm, based on an unsupervised metric called Agent Importance Score, enabling the selection of best agents according to their contributions in a preliminary trial, oriented to the given task. Then, in the second stage, the selected agents collaborate dynamically according to the query. Empirically, we demonstrate that DyLAN outperforms strong baselines in code generation, decision-making, general reasoning, and arithmetic reasoning tasks with moderate computational cost. On specific subjects in MMLU, selecting a team of agents in the team optimization stage improves accuracy by up to 25.0% in DyLAN.
A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration
[ "Zijun Liu", "Yanzhe Zhang", "Peng Li", "Yang Liu", "Diyi Yang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
139
null
https://openreview.net/forum?id=XGJBEeziEb
@inproceedings{ zhang2024data, title={Data Checklist: On Unit-Testing Datasets with Usable Information}, author={Heidi Chenyu Zhang and Shabnam Behzad and Kawin Ethayarajh and Dan Jurafsky}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=XGJBEeziEb} }
Model checklists (Ribeiro et al., 2020) have emerged as a useful tool for understanding the behavior of LLMs, analogous to unit-testing in software engineering. However, despite datasets being a key determinant of model behavior, evaluating datasets -- e.g., for the existence of annotation artifacts -- is largely done ad hoc, once a problem in model behavior has already been found downstream. In this work, we take a more principled approach to unit-testing datasets by proposing a taxonomy based on the $\mathcal{V}$-information literature. We call a collection of such unit tests a data checklist. Using the checklist, not only are we able to recover known artifacts in well-known datasets such as SNLI, but we also discover previously unknown artifacts in preference datasets for LLM alignment. Data checklists further enable a new kind of data filtering, which we use to improve the efficacy and data efficiency of preference alignment.
Data Checklist: On Unit-Testing Datasets with Usable Information
[ "Heidi Chenyu Zhang", "Shabnam Behzad", "Kawin Ethayarajh", "Dan Jurafsky" ]
Conference
Poster
2408.02919
[ "https://github.com/ChenyuHeidiZhang/data_checklist" ]
-1
-1
-1
-1
[]
[]
[]
0
140
null
https://openreview.net/forum?id=X9yV4lFHt4
@inproceedings{ neplenbroek2024mbbq, title={{MBBQ}: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative {LLM}s}, author={Vera Neplenbroek and Arianna Bisazza and Raquel Fern{\'a}ndez}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=X9yV4lFHt4} }
Generative large language models (LLMs) have been shown to exhibit harmful biases and stereotypes. While safety fine-tuning typically takes place in English, if at all, these models are being used by speakers of many different languages. There is existing evidence that the performance of these models is inconsistent across languages and that they discriminate based on demographic factors of the user. Motivated by this, we investigate whether the social stereotypes exhibited by LLMs differ as a function of the language used to prompt them, while controlling for cultural differences and task accuracy. To this end, we present MBBQ (Multilingual Bias Benchmark for Question-answering), a carefully curated version of the English BBQ dataset extended to Dutch, Spanish, and Turkish, which measures stereotypes commonly held across these languages. We further complement MBBQ with a parallel control dataset to measure task performance on the question-answering task independently of bias. Our results based on several open-source and proprietary LLMs confirm that some non-English languages suffer from bias more than English, even when controlling for cultural shifts. Moreover, we observe significant cross-lingual differences in bias behaviour for all except the most accurate models. With the release of MBBQ, we hope to encourage further research on bias in multilingual settings. The dataset and code are available at https://github.com/Veranep/MBBQ.
MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs
[ "Vera Neplenbroek", "Arianna Bisazza", "Raquel Fernández" ]
Conference
Poster
2406.07243
[ "https://github.com/veranep/mbbq" ]
-1
-1
-1
-1
[]
[]
[]
0
141
null
https://openreview.net/forum?id=X1xNsuKssb
@inproceedings{ wang2024mambabyte, title={MambaByte: Token-free Selective State Space Model}, author={Junxiong Wang and Tushaar Gangavarapu and Jing Nathan Yan and Alexander M Rush}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=X1xNsuKssb} }
Token-free language models learn directly from raw bytes and remove the inductive bias of subword tokenization. Operating on bytes, however, results in significantly longer sequences. In this setting, standard autoregressive Transformers scale poorly as the effective memory required grows with sequence length. The recent development of the Mamba state space model (SSM) offers an appealing alternative approach with a fixed-sized memory state and efficient decoding. We propose MambaByte, a token-free adaptation of the Mamba SSM trained autoregressively on byte sequences. In terms of modeling, we show MambaByte to be competitive with, and even to outperform, state-of-the-art subword Transformers on language modeling tasks while maintaining the benefits of token-free language models, such as robustness to noise. In terms of efficiency, we develop an adaptation of speculative decoding with tokenized drafting and byte-level verification. This results in a $2.6\times$ inference speedup to the standard MambaByte implementation, showing similar decoding efficiency as the subword Mamba. These findings establish the viability of SSMs in enabling token-free language modeling.
MambaByte: Token-free Selective State Space Model
[ "Junxiong Wang", "Tushaar Gangavarapu", "Jing Nathan Yan", "Alexander M Rush" ]
Conference
Poster
2401.13660
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
142
null
https://openreview.net/forum?id=W8Rv1jVycX
@inproceedings{ ravfogel2024descriptionbased, title={Description-Based Text Similarity}, author={Shauli Ravfogel and Valentina Pyatkin and Amir David Nissan Cohen and Avshalom Manevich and Yoav Goldberg}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=W8Rv1jVycX} }
Identifying texts with a given semantics is central for many information seeking scenarios. Similarity search over vector embeddings appears to be central to this ability, yet the similarity reflected in current text embeddings is corpus-driven, and is inconsistent and sub-optimal for many use cases. What, then, is a good notion of similarity for effective retrieval of text? We identify the need to search for texts based on abstract descriptions of their content, and the corresponding notion of description-based similarity. We demonstrate the inadequacy of current text embeddings and propose an alternative model that significantly improves when used in standard nearest neighbor search. The model is trained using positive and negative pairs sourced through prompting an LLM, demonstrating how data from LLMs can be used for creating new capabilities not immediately possible using the original model.
Description-Based Text Similarity
[ "Shauli Ravfogel", "Valentina Pyatkin", "Amir David Nissan Cohen", "Avshalom Manevich", "Yoav Goldberg" ]
Conference
Poster
2305.12517
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
143
null
https://openreview.net/forum?id=Vd0KvChLXr
@inproceedings{ guo2024generating, title={Generating Synthetic Datasets for Few-shot Prompt Tuning}, author={Xu Guo and Zilin Du and Boyang Li and Chunyan Miao}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Vd0KvChLXr} }
A major limitation of prompt tuning is its dependence on large labeled training datasets. Under few-shot learning settings, prompt tuning lags far behind full-model fine-tuning, limiting its scope of application. In this paper, we leverage powerful LLMs to synthesize task-specific labeled data for training the soft prompts. We first introduce a distribution-aligned weighted generator tuning (DawGen) method to encourage generating in-distribution data that aligns with the few-shot real data. Then, we train soft prompts on both synthetic and real datasets using a gradient surgery approach, which eliminates the conflicting gradients from different data sources. Experiments on seven sentence-pair classification datasets demonstrate the effectiveness of our proposed method for boosting prompt tuning in few-shot learning settings. Results on the QQP, MRPC, and SICK datasets are even comparable to the performance of transfer learning from large real-world datasets, showing the promise of synthetic data as an alternative for enhancing soft prompt tuning.
Generating Synthetic Datasets for Few-shot Prompt Tuning
[ "Xu Guo", "Zilin Du", "Boyang Li", "Chunyan Miao" ]
Conference
Poster
2410.10865
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
144
null
https://openreview.net/forum?id=VWWzO3ewMS
@inproceedings{ khurana2024crowdcalibrator, title={Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?}, author={Urja Khurana and Eric Nalisnick and Antske Fokkens and Swabha Swayamdipta}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=VWWzO3ewMS} }
Subjective tasks in NLP have been mostly relegated to objective standards, where the gold label is decided by taking the majority vote. This obfuscates annotator disagreement and the inherent uncertainty of the label. We argue that subjectivity should factor into model decisions and play a direct role via calibration under a selective prediction setting. Specifically, instead of calibrating confidence purely from the model’s perspective, we calibrate models for subjective tasks based on crowd worker agreement. Our method, Crowd-Calibrator, models the distance between the distribution of crowd worker labels and the model’s own distribution over labels to inform whether the model should abstain from a decision. On two highly subjective tasks, hate speech detection and natural language inference, our experiments show Crowd-Calibrator either outperforms or achieves competitive performance with existing selective prediction baselines. Our findings highlight the value of bringing human decision-making into model predictions.
Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?
[ "Urja Khurana", "Eric Nalisnick", "Antske Fokkens", "Swabha Swayamdipta" ]
Conference
Poster
2408.14141
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
145
null
https://openreview.net/forum?id=VHhwhmtx3b
@inproceedings{ bai2024does, title={Does Ro{BERT}a Perform Better than {BERT} in Continual Learning: An Attention Sink Perspective}, author={Xueying Bai and Yifan Sun and Niranjan Balasubramanian}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=VHhwhmtx3b} }
Continual learning (CL) aims to train models that can sequentially learn new tasks without forgetting previous tasks' knowledge. Although previous works observed that pre-training can benefit CL, it remains unclear whether a pre-trained model with higher downstream capacity also performs better in CL. In this paper, we observe that pre-trained models may allocate high attention scores to some 'sink' tokens, such as [SEP] tokens, which are ubiquitous across various tasks. Such attention sinks may lead to models' over-smoothing in single-task learning and interference in sequential tasks’ learning, which may compromise the models' CL performance despite their high pre-trained capabilities. To reduce these effects, we propose a pre-scaling mechanism that encourages attention diversity across all tokens. Specifically, it first scales the task's attention to the non-sink tokens in a probing stage, and then fine-tunes the model with scaling. Experiments show that pre-scaling yields substantial improvements in CL without experience replay, or progressively storing parameters from previous tasks.
Does RoBERTa Perform Better than BERT in Continual Learning: An Attention Sink Perspective
[ "Xueying Bai", "Yifan Sun", "Niranjan Balasubramanian" ]
Conference
Poster
2410.05648
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
146
null
https://openreview.net/forum?id=V7HRrxXUhN
@inproceedings{ thakur2024an, title={An In-Context Learning Agent for Formal Theorem-Proving}, author={Amitayush Thakur and George Tsoukalas and Yeming Wen and Jimmy Xin and Swarat Chaudhuri}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=V7HRrxXUhN} }
We present an in-context learning agent for formal theorem-proving in environments like Lean and Coq. Current state-of-the-art models for the problem are finetuned on environment-specific proof data. By contrast, our approach, called COPRA, repeatedly asks a high-capacity, general-purpose large language model (GPT-4) to propose tactic applications from within a stateful backtracking search. Proposed tactics are executed in the underlying proof environment. Feedback from the execution is used to build the prompt for the next model query, along with selected information from the search history and lemmas retrieved from an external database. We evaluate our implementation of COPRA on the miniF2F benchmark for Lean and a set of Coq tasks from the CompCert project. On these benchmarks, COPRA significantly outperforms few-shot invocations of GPT-4. It also compares favorably against finetuning-based approaches, outperforming REPROVER, a state-of-the-art finetuned approach for Lean, in terms of the pass@1 metric. Our code and data are available at https://github.com/trishullab/copra
An In-Context Learning Agent for Formal Theorem-Proving
[ "Amitayush Thakur", "George Tsoukalas", "Yeming Wen", "Jimmy Xin", "Swarat Chaudhuri" ]
Conference
Poster
2310.04353
[ "https://github.com/trishullab/copra" ]
-1
-1
-1
-1
[]
[]
[]
0
147
null
https://openreview.net/forum?id=UyNIH6CWHH
@inproceedings{ hagemann2024efficient, title={Efficient Parallelization Layouts for Large-Scale Distributed Model Training}, author={Johannes Hagemann and Samuel Weinbach and Konstantin Dobler and Maximilian Schall and Gerard de Melo}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=UyNIH6CWHH} }
Efficiently training large language models requires parallelizing across hundreds of hardware accelerators and invoking various compute and memory optimizations. When combined, many of these strategies have complex interactions regarding the final training efficiency. Prior work tackling this problem did not have access to the latest set of optimizations, such as FlashAttention or sequence parallelism. In this work, we conduct a comprehensive ablation study of possible training configurations for large language models. We distill this large study into several key recommendations for the most efficient training. For instance, we find that using a micro-batch size of 1 usually enables the most efficient training layouts. Larger micro-batch sizes necessitate activation checkpointing or higher degrees of model parallelism and also lead to larger pipeline bubbles. Our most efficient configurations enable us to achieve state-of-the-art training efficiency results over a range of model sizes, most notably a Model FLOPs utilization of 70.5% when training a LLaMA 13B model.
Efficient Parallelization Layouts for Large-Scale Distributed Model Training
[ "Johannes Hagemann", "Samuel Weinbach", "Konstantin Dobler", "Maximilian Schall", "Gerard de Melo" ]
Conference
Poster
2311.05610
[ "https://github.com/aleph-alpha/neurips-want-submission-efficient-parallelization-layouts" ]
https://huggingface.co/papers/2311.05610
1
0
0
5
[]
[]
[]
1
148
null
https://openreview.net/forum?id=Ukf4301hXm
@inproceedings{ zhang2024unforgettable, title={Unforgettable Generalization in Language Models}, author={Eric Zhang and Leshem Choshen and Jacob Andreas}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Ukf4301hXm} }
When language models (LMs) are trained to ``unlearn'' a skill, does this unlearning generalize? We study the behavior of LMs after they are fine-tuned on data for a target task (e.g. sentiment analysis) in which the labels have been randomized, a popular unlearning method. While LMs consistently learn to generate near-random predictions for individual training examples in the unlearning set, there is extreme variability across tasks in whether LM predictions change on examples outside the unlearning set. In some tasks (like sentiment analysis), unlearning generalizes robustly, and causes models to generate random outputs on all sentiment-type inputs; in other tasks (like physical commonsense reasoning and scientific question answering) unlearning produces almost no generalization at all, and models continue to perform the task accurately even for examples very similar to those that appeared in the training set. Across tasks, we find that dataset difficulty is not predictive of whether a behavior can be unlearned; instead, generalization in unlearning is (weakly) predicted by the confidence of LMs' initial task predictions and the variability of LM representations of unlearning data, with low confidence and low variability both associated with greater generalization. Finally, we show that even generalizable unlearning is shallow: linear probes trained on LMs' representations can still perform tasks reliably after unlearning. Our results highlight the difficulty and unpredictability of performing targeted skill removal from models via fine-tuning.
Unforgettable Generalization in Language Models
[ "Eric Zhang", "Leshem Choshen", "Jacob Andreas" ]
Conference
Poster
2409.02228
[ "" ]
https://huggingface.co/papers/2409.02228
0
0
0
3
[]
[]
[]
1
149
null
https://openreview.net/forum?id=Uhwze2LEwq
@inproceedings{ dingjie2024milebench, title={MileBench: Benchmarking {MLLM}s in Long Context}, author={Song Dingjie and Shunian Chen and Guiming Hardy Chen and Fei Yu and Xiang Wan and Benyou Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Uhwze2LEwq} }
Despite the rapid progression of Multimodal Large Language Models (MLLMs) and their impressive performance on various benchmarks, the applicability of these results to real-world tasks remains uncertain. This ambiguity primarily stems from the benchmarks' limited consideration for long-context and multi-image tasks, which are critical elements in real-world applications. Existing benchmarks often focus on single-image and short-text samples, and when assessing multi-image tasks, they either limit the image count or focus on time-series captioning tasks, potentially masking MLLMs' performance challenges such as hallucination in long-context situations. To address these limitations, we introduce MileBench, a pioneering benchmark designed to rigorously test the multimodal long-context capabilities of MLLMs. This benchmark comprises a mix of text and images, long contexts, multiple tasks, and tasks requiring both comprehension and generation. We establish two distinct evaluation sets, diagnostic and realistic, to systematically assess MLLMs' long-context adaptation capacity and their ability to complete tasks in long-context scenarios. Our experimental results, garnered from testing 19 models, revealed that while the closed-source model GPT-4(Vision) outperforms others, most open-source MLLMs display inadequate performance in long-context situations. Hence, we strongly encourage an intensification of research efforts towards enhancing MLLMs' long-context capabilities, especially in scenarios involving multiple images.
MileBench: Benchmarking MLLMs in Long Context
[ "Song Dingjie", "Shunian Chen", "Guiming Hardy Chen", "Fei Yu", "Xiang Wan", "Benyou Wang" ]
Conference
Poster
2404.18532
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
150
null
https://openreview.net/forum?id=UfqzXg95I5
@inproceedings{ liao2024amplegcg, title={Ample{GCG}: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed {LLM}s}, author={Zeyi Liao and Huan Sun}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=UfqzXg95I5} }
Warning: This paper contains potentially offensive and harmful text. As large language models (LLMs) become increasingly prevalent and integrated into autonomous systems, ensuring their safety is imperative. Despite significant strides toward safety alignment, recent work GCG (Zou et al., 2023) proposes a discrete token optimization algorithm and selects the single suffix with the lowest loss to successfully jailbreak aligned LLMs. In this work, we first discuss the drawbacks of solely picking the suffix with the lowest loss during GCG optimization for jailbreaking and uncover the missed successful suffixes during the intermediate steps. Moreover, we utilize those successful suffixes as training data to learn a generative model, named AmpleGCG, which captures the distribution of adversarial suffixes given a harmful query and enables the rapid generation of hundreds of suffixes for any harmful query in seconds. AmpleGCG achieves a near 100% attack success rate (ASR) on two aligned LLMs (Llama-2-7B-chat and Vicuna-7B), surpassing the two strongest attack baselines. More interestingly, AmpleGCG also transfers seamlessly to attack different models, including closed-source LLMs, achieving a 99% ASR on the latest GPT-3.5. To summarize, our work amplifies the impact of GCG by training a generative model of adversarial suffixes that is universal to any harmful query and transferable from attacking open-source LLMs to closed-source LLMs. Impressively, it can generate 200 adversarial suffixes for one harmful query in only 4 seconds, rendering it more challenging to defend against.
AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs
[ "Zeyi Liao", "Huan Sun" ]
Conference
Poster
2404.07921
[ "https://github.com/osu-nlp-group/amplegcg" ]
https://huggingface.co/papers/2404.07921
0
1
0
2
[ "osunlp/AmpleGCG-llama2-sourced-llama2-7b-chat", "osunlp/AmpleGCG-llama2-sourced-vicuna-7b", "osunlp/AmpleGCG-llama2-sourced-vicuna-7b13b-guanaco-7b13b", "osunlp/AmpleGCG-plus-llama2-sourced-llama2-7b-chat", "osunlp/AmpleGCG-plus-llama2-sourced-vicuna-7b13b-guanaco-7b13b" ]
[]
[]
1
151
null
https://openreview.net/forum?id=UfWwBaLuXV
@inproceedings{ yan2024list, title={List Items One by One: A New Data Source and Learning Paradigm for Multimodal {LLM}s}, author={An Yan and Zhengyuan Yang and Junda Wu and Wanrong Zhu and Jianwei Yang and Linjie Li and Kevin Lin and Jianfeng Wang and Julian McAuley and Jianfeng Gao and Lijuan Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=UfWwBaLuXV} }
Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of GPT-4V, by enabling the model to associate visual objects with tags inserted on the image. These tags, marked with alphanumerics, can be indexed via text tokens for easy reference. Despite the extraordinary performance from GPT-4V, we observe that other Multimodal Large Language Models (MLLMs) struggle to understand these visual tags. To promote the learning of SoM prompting for open-source models, we propose a new learning paradigm: list items one by one, which asks the model to enumerate and describe all visual tags placed on the image following the alphanumeric order of tags. By integrating our synthetic dataset with other visual instruction tuning datasets, we are able to equip existing MLLMs with the SoM prompting ability. Furthermore, we evaluate our finetuned SoM models on seven MLLM benchmarks. We find that this new dataset, even in a relatively small size (10k-30k images with tags), significantly enhances visual reasoning capabilities and reduces hallucinations for MLLMs. Perhaps surprisingly, these improvements persist even when the visual tags are omitted from input images during inference. This suggests the potential of ``list items one by one'' as a new paradigm for training MLLMs, which strengthens the object-text alignment through the use of visual tags in the training stage. Finally, we conduct analyses by probing trained models to understand the working mechanism of SoM. Our code and data are available at https://github.com/zzxslp/SoM-LLaVA.
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
[ "An Yan", "Zhengyuan Yang", "Junda Wu", "Wanrong Zhu", "Jianwei Yang", "Linjie Li", "Kevin Lin", "Jianfeng Wang", "Julian McAuley", "Jianfeng Gao", "Lijuan Wang" ]
Conference
Poster
2404.16375
[ "https://github.com/zzxslp/som-llava" ]
https://huggingface.co/papers/2404.16375
9
16
2
11
[ "zzxslp/som-llava-v1.5-13b", "zzxslp/som-llava-v1.5-13b-hf" ]
[]
[]
1
152
null
https://openreview.net/forum?id=UPyWLwciYz
@inproceedings{ khalifa2024sourceaware, title={Source-Aware Training Enables Knowledge Attribution in Language Models}, author={Muhammad Khalifa and David Wadden and Emma Strubell and Honglak Lee and Lu Wang and Iz Beltagy and Hao Peng}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=UPyWLwciYz} }
Large language models (LLMs) learn a vast amount of knowledge during pretraining, but they are often oblivious to the source(s) of such knowledge. We investigate the problem of intrinsic source citation, where LLMs are required to cite the pretraining source supporting a generated response. Intrinsic source citation can enhance LLM transparency, interpretability, and verifiability. To give LLMs such ability, we explore source-aware training---a recipe that involves (i) training the LLM to associate unique source document identifiers with the knowledge in each document, followed by (ii) an instruction-tuning stage to teach the LLM to cite a supporting pretraining source when prompted. Source-aware training borrows from existing pretraining/fine-tuning frameworks and requires minimal changes to the model architecture or implementation. Through experiments on synthetic data, we demonstrate that our training recipe can enable faithful attribution to the pretraining data without a substantial impact on the model's perplexity compared to standard pretraining. Our findings also highlight the importance of pretraining data augmentation in achieving attribution.
Source-Aware Training Enables Knowledge Attribution in Language Models
[ "Muhammad Khalifa", "David Wadden", "Emma Strubell", "Honglak Lee", "Lu Wang", "Iz Beltagy", "Hao Peng" ]
Conference
Poster
2404.01019
[ "https://github.com/mukhal/intrinsic-source-citation" ]
-1
-1
-1
-1
[]
[]
[]
0
153
null
https://openreview.net/forum?id=UPE6WYE8vg
@inproceedings{ mao2024a, title={A Language Agent for Autonomous Driving}, author={Jiageng Mao and Junjie Ye and Yuxi Qian and Marco Pavone and Yue Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=UPE6WYE8vg} }
Human-level driving is an ultimate goal of autonomous driving. Conventional approaches formulate autonomous driving as a perception-prediction-planning framework, yet their systems do not capitalize on the inherent reasoning ability and experiential knowledge of humans. In this paper, we propose a fundamental paradigm shift from current pipelines, exploiting Large Language Models (LLMs) as a cognitive agent to integrate human-like intelligence into autonomous driving systems. Our system, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library accessible via function calls, a cognitive memory of common sense and experiential knowledge for decision-making, and a reasoning engine capable of chain-of-thought reasoning, task planning, motion planning, and self-reflection. Powered by LLMs, our Agent-Driver is endowed with intuitive common sense and robust reasoning capabilities, thus enabling a more nuanced, human-like approach to autonomous driving. We evaluate our system on both open-loop and closed-loop driving challenges, and extensive experiments substantiate that our Agent-Driver significantly outperforms the state-of-the-art driving methods by a large margin. Our approach also demonstrates superior interpretability and few-shot learning ability compared to these methods.
A Language Agent for Autonomous Driving
[ "Jiageng Mao", "Junjie Ye", "Yuxi Qian", "Marco Pavone", "Yue Wang" ]
Conference
Poster
2311.10813
[ "https://github.com/usc-gvl/agent-driver" ]
https://huggingface.co/papers/2311.10813
0
0
0
5
[]
[]
[]
1
154
null
https://openreview.net/forum?id=U5BUzSn4tD
@inproceedings{ hu2024auxiliary, title={Auxiliary task demands mask the capabilities of smaller language models}, author={Jennifer Hu and Michael Frank}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=U5BUzSn4tD} }
Developmental psychologists have argued about when cognitive capacities such as language understanding or theory of mind emerge. These debates often hinge on the concept of "task demands" -- the auxiliary challenges associated with performing a particular evaluation -- that may mask the child’s underlying ability. The same issues arise when measuring the capacities of language models (LMs): performance on a task is a function of the model's underlying knowledge, combined with the model’s ability to interpret and perform the task given its available resources. Here, we show that for analogical reasoning, reflective reasoning, word prediction, and grammaticality judgments, evaluation methods with greater task demands yield lower performance than evaluations with reduced demands. This "demand gap" is most pronounced for models with fewer parameters and less training data. Our results illustrate that LM performance should not be interpreted as a direct indication of intelligence (or lack thereof), but as a reflection of capacities seen through the lens of researchers' design choices.
Auxiliary task demands mask the capabilities of smaller language models
[ "Jennifer Hu", "Michael Frank" ]
Conference
Oral
2404.02418
[ "https://github.com/jennhu/lm-task-demands" ]
-1
-1
-1
-1
[]
[]
[]
0
155
null
https://openreview.net/forum?id=TrloAXEJ2B
@inproceedings{ huang2024lorahub, title={LoraHub: Efficient Cross-Task Generalization via Dynamic Lo{RA} Composition}, author={Chengsong Huang and Qian Liu and Bill Yuchen Lin and Tianyu Pang and Chao Du and Min Lin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=TrloAXEJ2B} }
Low-rank adaptation (LoRA) is often employed to fine-tune large language models (LLMs) for new tasks. This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks. With just a few examples from a new task, LoraHub can fluidly combine multiple LoRA modules, eliminating the need for human expertise and assumptions. Notably, the composition requires neither additional model parameters nor gradients. Empirical results on the Big-Bench Hard benchmark suggest that LoraHub, while not surpassing the performance of in-context learning, offers a notable performance-efficiency trade-off in few-shot scenarios by employing a significantly reduced number of tokens per example during inference. Notably, LoraHub establishes a better upper bound compared to in-context learning when paired with different demonstration examples, demonstrating its potential for future development. Our vision is to establish a platform for LoRA modules, empowering users to share their trained LoRA modules. This collaborative approach facilitates the seamless application of LoRA modules to novel tasks, contributing to an adaptive ecosystem.
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
[ "Chengsong Huang", "Qian Liu", "Bill Yuchen Lin", "Tianyu Pang", "Chao Du", "Min Lin" ]
Conference
Poster
2307.13269
[ "https://github.com/sail-sg/lorahub" ]
https://huggingface.co/papers/2307.13269
6
31
2
6
[]
[]
[ "sail/lorahub" ]
1
156
null
https://openreview.net/forum?id=Ti67584b98
@inproceedings{ rein2024gpqa, title={{GPQA}: A Graduate-Level Google-Proof Q\&A Benchmark}, author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Ti67584b98} }
We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are "Google-proof"). When we released this dataset in November 2023, GPT-4 achieved 39% accuracy. As of March 2024, Claude 3 Opus achieves a reported score of approximately 60%, highlighting the rapid pace of progress in AI. If we are to use future AI systems to help us answer very hard questions—for example, when developing new scientific knowledge—we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA for skilled non-experts should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.
GPQA: A Graduate-Level Google-Proof Q&A Benchmark
[ "David Rein", "Betty Li Hou", "Asa Cooper Stickland", "Jackson Petty", "Richard Yuanzhe Pang", "Julien Dirani", "Julian Michael", "Samuel R. Bowman" ]
Conference
Oral
[ "https://github.com/idavidrein/gpqa" ]
-1
-1
-1
-1
[]
[]
[]
0
157
null
https://openreview.net/forum?id=TZ0CCGDcuT
@inproceedings{ hanna2024have, title={Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms}, author={Michael Hanna and Sandro Pezzelle and Yonatan Belinkov}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=TZ0CCGDcuT} }
Many recent language model (LM) interpretability studies have adopted the circuits framework, which aims to find the minimal computational subgraph, or circuit, that explains LM behavior on a given task. Most studies determine which edges belong in an LM's circuit for a task by performing causal interventions on each edge independently, but this scales poorly with model size. As a solution, recent work has proposed edge attribution patching (EAP), a scalable gradient-based approximation to interventions. In this paper, we introduce a new method - EAP with integrated gradients (EAP-IG) - that aims to efficiently find circuits while better maintaining one of their core properties: faithfulness. A circuit is faithful if all model edges outside the circuit can be ablated without changing the model's behavior on the task; faithfulness is what justifies studying circuits, rather than the full model. Our experiments demonstrate that circuits found using EAP-IG are more faithful than those found using EAP, even though both have high node overlap with reference circuits found using causal interventions. We conclude more generally that when comparing circuits, measuring overlap is no substitute for measuring faithfulness.
Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms
[ "Michael Hanna", "Sandro Pezzelle", "Yonatan Belinkov" ]
Conference
Poster
2403.17806
[ "https://github.com/hannamw/eap-ig" ]
https://huggingface.co/papers/2403.17806
1
3
0
3
[]
[]
[]
1
158
null
https://openreview.net/forum?id=TRxQMpLUfD
@inproceedings{ yauney2024stronger, title={Stronger Random Baselines for In-Context Learning}, author={Gregory Yauney and David Mimno}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=TRxQMpLUfD} }
Evaluating the in-context learning classification performance of language models poses challenges due to small dataset sizes, extensive prompt-selection using the validation set, and intentionally difficult tasks that lead to near-random performance. The standard random baseline--the expected accuracy of guessing labels uniformly at random--is stable when the evaluation set is used only once or when the dataset is large. We account for the common practice of validation set reuse and existing small datasets with a stronger random baseline: the expected maximum accuracy across multiple random classifiers. When choosing the best prompt demonstrations across six quantized language models applied to 16 BIG-bench Lite tasks, more than 20% of the few-shot results that exceed the standard baseline do not exceed this stronger random baseline. When held-out test sets are available, this stronger baseline is also a better predictor of held-out performance than the standard baseline, avoiding unnecessary test set evaluations. This maximum random baseline provides an easily calculated drop-in replacement for the standard baseline.
Stronger Random Baselines for In-Context Learning
[ "Gregory Yauney", "David Mimno" ]
Conference
Poster
2404.13020
[ "https://github.com/gyauney/max-random-baseline" ]
-1
-1
-1
-1
[]
[]
[]
0
159
null
https://openreview.net/forum?id=TQdd1VhWbe
@inproceedings{ fujii2024continual, title={Continual Pre-Training for Cross-Lingual {LLM} Adaptation: Enhancing Japanese Language Capabilities}, author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=TQdd1VhWbe} }
Cross-lingual continual pre-training of large language models (LLMs) initially trained on English corpus allows us to leverage the vast amount of English language resources and reduce the pre-training cost. In this study, we constructed Swallow, an LLM with enhanced Japanese capability, by extending the vocabulary of Llama 2 to include Japanese characters and conducting continual pre-training on a large Japanese web corpus. Experimental results confirmed that the performance on Japanese tasks drastically improved through continual pre-training, and the performance monotonically increased with the amount of training data up to 100B tokens. Consequently, Swallow achieved superior performance compared to other LLMs that were trained from scratch in English and Japanese. An analysis of the effects of continual pre-training revealed that it was particularly effective for Japanese question answering tasks. Furthermore, to elucidate effective methodologies for cross-lingual continual pre-training from English to Japanese, we investigated the impact of vocabulary expansion and the effectiveness of incorporating parallel corpora. The results showed that the efficiency gained through vocabulary expansion had no negative impact on performance, except for the summarization task, and that the combined use of parallel corpora enhanced translation ability.
Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities
[ "Kazuki Fujii", "Taishi Nakamura", "Mengsay Loem", "Hiroki Iida", "Masanari Ohi", "Kakeru Hattori", "Hirai Shota", "Sakae Mizuki", "Rio Yokota", "Naoaki Okazaki" ]
Conference
Poster
2404.17790
[ "" ]
https://huggingface.co/papers/2404.17790
2
5
0
10
[ "tokyotech-llm/Swallow-7b-instruct-hf", "tokyotech-llm/Swallow-70b-instruct-hf", "tokyotech-llm/Swallow-13b-instruct-hf", "tokyotech-llm/Swallow-7b-hf", "tokyotech-llm/Swallow-13b-hf", "tokyotech-llm/Swallow-7b-plus-hf", "tokyotech-llm/Swallow-70b-hf", "tokyotech-llm/Swallow-7b-NVE-instruct-hf", "tokyotech-llm/Swallow-70b-NVE-instruct-hf", "tokyotech-llm/Swallow-7b-NVE-hf", "tokyotech-llm/Swallow-70b-NVE-hf", "tokyotech-llm/Swallow-13b-NVE-hf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-4bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-8bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-4bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-8bits", "RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-70b-NVE-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-70b-instruct-v0.1-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-v0.1-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-hf-gguf" ]
[]
[ "hayas/Swallow-13B-instruct", "mmnga/vocabviewer", "kmero/tokyotech-llm-Swallow-70b-instruct-hf", "isonuma/marutenbo", "Huaibo/tokyotech-llm-Swallow-7b-instruct-hf" ]
1
160
null
https://openreview.net/forum?id=TBNYjdOazs
@inproceedings{ kim2024decoupling, title={Decoupling Noise and Toxic Parameters for Language Model Detoxification by Task Vector Merging}, author={Yongmin Kim and Takeshi Kojima and Yusuke Iwasawa and Yutaka Matsuo}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=TBNYjdOazs} }
The goal of detoxifying language models is to reduce the chances of producing offensive or harmful output in pre-trained language models (PLMs), ensuring their safer use. A recently proposed detoxification method utilizes the task vector obtained by subtracting the pre-trained model from a model fine-tuned on toxic datasets. This approach has shown effectiveness for detoxification but still suffers from degradation. This study focuses on further mitigating degradation while maintaining detoxification performance. To mitigate the degradation, we propose a method that detoxifies the PLMs by fine-tuning multiple models on split toxic datasets and by merging the subtracted task vectors. We conducted experiments on two toxic datasets (Civil Comments and Toxigen) with five PLMs (GPT2-small, GPT2-medium, GPT2-large, Phi-1.5, and Llama2-7b), demonstrating that our method consistently achieves a lower toxicity score while preventing the degradation compared to baseline methods. In particular, with the GPT2-small model on the Toxigen dataset, degradation was reduced by 38.9\% compared to that of an existing task vector method while maintaining a similar toxicity score. In addition, we found that merging multiple detoxified models tends to increase the number of parameters that remained almost unchanged from the pre-trained model. We assume that by merging multiple detoxified models, "decoupling noise and toxic parameters" is implicitly achieved. The accidental noise in the parameter shift unrelated to detoxification disappears by averaging noise, whereas the parameter shift associated with detoxification is maintained. We hope that the findings of this study will be applied not only to detoxification but also to many other research domains that seek to suppress undesirable outputs of language models.
Decoupling Noise and Toxic Parameters for Language Model Detoxification by Task Vector Merging
[ "Yongmin Kim", "Takeshi Kojima", "Yusuke Iwasawa", "Yutaka Matsuo" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
161
null
https://openreview.net/forum?id=T9cOYH0wGF
@inproceedings{ ram{\'\i}rez2024optimising, title={Optimising Calls to Large Language Models with Uncertainty-Based Two-Tier Selection}, author={Guillem Ram{\'\i}rez and Alexandra Birch and Ivan Titov}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=T9cOYH0wGF} }
Researchers and practitioners operating on a limited budget face the well-known cost-performance trade-off dilemma. The challenging decision often centers on whether to use a large LLM with better performance or a smaller one with reduced costs. This has motivated recent research in the optimisation of LLM calls. Either a cascading strategy is used, where a smaller LLM or both are called causally, or a routing strategy is used, where only one model is ever called. In both scenarios, this depends on a decision criterion, which is typically an auxiliary neural model. In this work, we propose a cost-effective solution; we use only the uncertainty of the generations of the small LLM as the decision criterion. We compare our approach with both cascading and routing strategies using three different pairs of pre-trained small and large LLMs, on nine different tasks and against approaches that require an additional neural model. Our experiments reveal that this simple solution optimally balances cost and performance, outperforming existing methods on 25 out of 27 experimental setups.
Optimising Calls to Large Language Models with Uncertainty-Based Two-Tier Selection
[ "Guillem Ramírez", "Alexandra Birch", "Ivan Titov" ]
Conference
Poster
2405.02134
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
162
null
https://openreview.net/forum?id=T5pGDydMkS
@inproceedings{ ou2024adaptive, title={Adaptive Quantization Error Reconstruction for {LLM}s with Mixed Precision}, author={Lin Ou and Jinpeng Xia and Yuewei Zhang and Chuzhan Hao and Hao Henry Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=T5pGDydMkS} }
Large language models (LLMs) have demonstrated superior performance on various downstream tasks. However, their practical applications are hindered by their immense memory and computation requirements. Although recent post-training quantization methods can effectively reduce memory usage and improve computational efficiency, they often overlook the varying sensitivity of different layer weights to bit precision. Additionally, the previous methods suffer from significant accuracy loss under low-bit quantization (2-3 bits). To address these limitations, we propose Adaptive Mixed Precision and Low-Rank Quantization Error Reconstruction for LLMs (AMLQ), which achieves state-of-the-art performance under the approximate average bit precision overall. Furthermore, we introduce the low-rank decomposition to reconstruct quantization error based on the output features. Experimental results demonstrate that this method can be effectively combined with various quantization techniques and bring considerable performance gains. Our approach comprehensively considers model performance and inference efficiency, offering more than 3$\times$ speedup over the FP16 execution.
Adaptive Quantization Error Reconstruction for LLMs with Mixed Precision
[ "Lin Ou", "Jinpeng Xia", "Yuewei Zhang", "Chuzhan Hao", "Hao Henry Wang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
163
null
https://openreview.net/forum?id=Szp33itD10
@inproceedings{ li2024styletalker, title={StyleTalker: Finetuning Audio Language Model and Style-Based Text-to-Speech Model for Fast Spoken Dialogue Generation}, author={Yinghao Aaron Li and Xilin Jiang and Jordan Darefsky and Ge Zhu and Nima Mesgarani}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Szp33itD10} }
The rapid advancement of large language models (LLMs) has significantly propelled the development of text-based chatbots, demonstrating their capability to engage in coherent and contextually relevant dialogues. However, extending these advancements to enable end-to-end speech-to-speech conversation bots remains a formidable challenge, primarily due to the extensive dataset and computational resources required. The conventional approach of cascading automatic speech recognition (ASR), LLM, and text-to-speech (TTS) models in a pipeline, while effective, suffers from unnatural prosody because it lacks direct interactions between the input audio and its transcribed text and the output audio. These systems are also limited by their inherent latency from the ASR process for real-time applications. This paper introduces Style-Talker, an innovative framework that fine-tunes an audio LLM alongside a style-based TTS model for fast spoken dialog generation. Style-Talker takes user input audio and uses transcribed chat history and speech styles to generate both the speaking style and text for the response. Subsequently, the TTS model synthesizes the speech, which is then played back to the user. While the response speech is being played, the input speech undergoes ASR processing to extract the transcription and speaking style, serving as the context for the ensuing dialogue turn. This novel pipeline accelerates the traditional cascade ASR-LLM-TTS systems while integrating rich paralinguistic information from input speech. Our experimental results show that Style-Talker significantly outperforms the conventional cascade and speech-to-speech baselines in terms of both dialogue naturalness and coherence while being more than 50\% faster. The demo and code are available at https://styletalker.github.io/.
StyleTalker: Finetuning Audio Language Model and Style-Based Text-to-Speech Model for Fast Spoken Dialogue Generation
[ "Yinghao Aaron Li", "Xilin Jiang", "Jordan Darefsky", "Ge Zhu", "Nima Mesgarani" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
164
null
https://openreview.net/forum?id=SwUsFTtM9h
@inproceedings{ naseh2024iteratively, title={Iteratively Prompting Multimodal {LLM}s to Reproduce Natural and {AI}-Generated Images}, author={Ali Naseh and Katherine Thai and Mohit Iyyer and Amir Houmansadr}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=SwUsFTtM9h} }
With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media. Traditional stock images now exist alongside innovative platforms that trade in prompts for AI-generated visuals, driven by sophisticated APIs like DALL-E 3 and Midjourney. This paper studies the possibility of employing multi-modal models with enhanced visual understanding to mimic the outputs of these platforms, introducing an original attack strategy. Our method leverages fine-tuned CLIP models, a multi-label classifier, and the descriptive capabilities of GPT-4V to create prompts that generate images similar to those available in marketplaces and from premium stock image providers, yet at a markedly lower expense. In presenting this strategy, we aim to spotlight a new class of economic and security considerations within the realm of digital imagery. Our findings, supported by both automated metrics and human assessment, reveal that comparable visual content can be produced for a fraction of the prevailing market prices (\$0.23 - \$0.27 per image), emphasizing the need for awareness and strategic discussions about the integrity of digital media in an increasingly AI-integrated landscape. Additionally, this approach holds promise as a tool for data augmentation, potentially enhancing machine learning models by providing varied and cost-effective training data. Our work also contributes to the field by assembling a dataset consisting of approximately 19 million prompt-image pairs generated by the popular Midjourney platform, which we plan to release publicly.
Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images
[ "Ali Naseh", "Katherine Thai", "Mohit Iyyer", "Amir Houmansadr" ]
Conference
Oral
2404.13784
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
165
null
https://openreview.net/forum?id=SHMj84U5SH
@inproceedings{ huang2024compression, title={Compression Represents Intelligence Linearly}, author={Yuzhen Huang and Jinghan Zhang and Zifei Shan and Junxian He}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=SHMj84U5SH} }
There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): development of more advanced language models is essentially enhancing compression which facilitates intelligence. Despite such appealing discussions, little empirical evidence is present for the interplay between compression and intelligence. In this work, we examine the relationship between compression and intelligence in the context of LLMs, treating LLMs as data compressors. Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 31 public LLMs that vary in size and originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by benchmark scores -- almost **linearly** correlates with their ability to compress external text corpora. These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence. Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. This work advocates for the adoption of compression performance as a stable, flexible, and reliable metric for evaluating LLMs. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly.
Compression Represents Intelligence Linearly
[ "Yuzhen Huang", "Jinghan Zhang", "Zifei Shan", "Junxian He" ]
Conference
Poster
2404.09937
[ "https://github.com/hkust-nlp/llm-compression-intelligence" ]
https://huggingface.co/papers/2404.09937
3
27
1
4
[]
[ "hkust-nlp/llm-compression" ]
[]
1
166
null
https://openreview.net/forum?id=SGoVIC0u0f
@inproceedings{ lehnert2024beyond, title={Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping}, author={Lucas Lehnert and Sainbayar Sukhbaatar and DiJia Su and Qinqing Zheng and Paul McVay and Michael Rabbat and Yuandong Tian}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=SGoVIC0u0f} }
While Transformers have enabled tremendous progress in various application settings, such architectures still struggle with solving planning and sequential decision-making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks. This is accomplished by first designing a synthetic language that captures the computation performed by the $A^*$ search algorithm when solving a planning task. Then, an encoder-decoder Transformer model is trained to predict this language, resulting in a language model that can correctly solve novel planning tasks by generating $A^*$'s search dynamics. We fine tune this model to obtain a Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7\% of the time, while using up to 26.8\% fewer search steps than our $A^*$ reference implementation. Searchformer significantly outperforms baselines that predict the optimal plan directly with a 5-10$\times$ smaller model size and a 10$\times$ smaller training dataset. Lastly, we demonstrate how Searchformer scales to larger and more complex decision making tasks with improved percentage of solved tasks and shortened search dynamics.
Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
[ "Lucas Lehnert", "Sainbayar Sukhbaatar", "DiJia Su", "Qinqing Zheng", "Paul McVay", "Michael Rabbat", "Yuandong Tian" ]
Conference
Poster
2402.14083
[ "https://github.com/facebookresearch/searchformer" ]
-1
-1
-1
-1
[]
[]
[]
0
167
null
https://openreview.net/forum?id=S7NVVfuRv8
@inproceedings{ wu2024how, title={How Easily do Irrelevant Inputs Skew the Responses of Large Language Models?}, author={Siye Wu and Jian Xie and Jiangjie Chen and Tinghui Zhu and Kai Zhang and Yanghua Xiao}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=S7NVVfuRv8} }
By leveraging the retrieval of information from external knowledge databases, Large Language Models (LLMs) exhibit enhanced capabilities for accomplishing many knowledge-intensive tasks. However, due to the inherent flaws of current retrieval systems, there might exist irrelevant information within the retrieved top-ranked passages. In this work, we present a comprehensive investigation into the robustness of LLMs to different types of irrelevant information under various conditions. We initially introduce a framework to construct high-quality irrelevant information that ranges from semantically unrelated to partially related and related to questions. Furthermore, our analysis demonstrates that the constructed irrelevant information not only scores highly on similarity metrics, being highly retrieved by existing systems, but also bears semantic connections to the context. Our investigation reveals that current LLMs still face challenges in discriminating highly semantically related information and can be easily distracted by this irrelevant yet misleading content. Moreover, we also find that current solutions for handling irrelevant information have limitations in improving the robustness of LLMs to such distractions. All the resources are available on [GitHub](https://github.com/Di-viner/LLM-Robustness-to-Irrelevant-Information).
How Easily do Irrelevant Inputs Skew the Responses of Large Language Models?
[ "Siye Wu", "Jian Xie", "Jiangjie Chen", "Tinghui Zhu", "Kai Zhang", "Yanghua Xiao" ]
Conference
Poster
2404.03302
[ "https://github.com/di-viner/llm-robustness-to-irrelevant-information" ]
https://huggingface.co/papers/2404.03302
1
2
0
6
[]
[]
[]
1
168
null
https://openreview.net/forum?id=S4ZOkV1AHl
@inproceedings{ kwok2024evaluating, title={Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas}, author={Louis Kwok and Michal Bravansky and Lewis Griffin}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=S4ZOkV1AHl} }
The success of Large Language Models (LLMs) in multicultural environments hinges on their ability to understand users' diverse cultural backgrounds. We measure this capability by having an LLM simulate human profiles representing various nationalities within the scope of a questionnaire-style psychological experiment. Specifically, we employ GPT-3.5 to reproduce reactions to persuasive news articles of 7,286 participants from 15 countries; comparing the results with a dataset of real participants sharing the same demographic traits. Our analysis shows that specifying a person's country of residence improves GPT-3.5's alignment with their responses. In contrast, using native language prompting introduces shifts that significantly reduce overall alignment, with some languages particularly impairing performance. These findings suggest that while direct nationality information enhances the model's cultural adaptability, native language cues do not reliably improve simulation fidelity and can detract from the model's effectiveness.
Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas
[ "Louis Kwok", "Michal Bravansky", "Lewis Griffin" ]
Conference
Poster
2408.06929
[ "https://github.com/louiskwoklf/llms-cultural-adaptability" ]
-1
-1
-1
-1
[]
[]
[]
0
169
null
https://openreview.net/forum?id=S1XnUsqwr7
@inproceedings{ zhu2024deductive, title={Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning}, author={Tinghui Zhu and Kai Zhang and Jian Xie and Yu Su}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=S1XnUsqwr7} }
Recent advancements have significantly augmented the reasoning capabilities of Large Language Models (LLMs) through various methodologies, especially chain-of-thought (CoT) reasoning. However, previous methods often struggle to address reasoning errors in intermediate steps, which can lead to accumulative errors. In this paper, we propose Deductive Beam Search (DBS), which seamlessly integrates CoT and deductive reasoning with step-wise beam search for LLMs. Our approach deploys a verifier, verifying the deducibility of a reasoning step and its premises, thus alleviating the error accumulation. Furthermore, we introduce a scalable and labor-free data construction method to amplify our model’s verification capabilities. Extensive experiments demonstrate that our approach significantly enhances the base performance of LLMs of various scales (7B, 13B, 70B, and ChatGPT) across 8 reasoning datasets from 3 diverse reasoning genres, including arithmetic, commonsense, and symbolic. Moreover, our analysis proves DBS’s capability of detecting diverse and subtle reasoning errors and robustness on different model scales.
Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning
[ "Tinghui Zhu", "Kai Zhang", "Jian Xie", "Yu Su" ]
Conference
Poster
2401.17686
[ "https://github.com/osu-nlp-group/deductive-beam-search" ]
https://huggingface.co/papers/2401.17686
0
0
0
4
[]
[]
[]
1
170
null
https://openreview.net/forum?id=Rx3wC8sCTJ
@inproceedings{ ross2024llm, title={{LLM} economicus? Mapping the Behavioral Biases of {LLM}s via Utility Theory}, author={Jillian Ross and Yoon Kim and Andrew Lo}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Rx3wC8sCTJ} }
Humans are not homo economicus (i.e., rational economic beings). As humans, we exhibit systematic behavioral biases such as loss aversion, anchoring, framing, etc., which lead us to make suboptimal economic decisions. Insofar as such biases may be embedded in text data on which large language models (LLMs) are trained, to what extent are LLMs prone to the same behavioral biases? Understanding these biases in LLMs is crucial for deploying LLMs to support human decision-making. We propose utility theory, a paradigm at the core of modern economic theory, as an approach to evaluate the economic biases of LLMs. Utility theory enables the quantification and comparison of economic behavior against benchmarks such as perfect rationality or human behavior. To demonstrate our approach, we quantify and compare the economic behavior of a variety of open- and closed-source LLMs. We find that the economic behavior of current LLMs is neither entirely human-like nor entirely economicus-like. We also find that most current LLMs struggle to maintain consistent economic behavior across settings. Finally, we illustrate how our approach can measure the effect of interventions such as prompting on economic biases.
LLM economicus? Mapping the Behavioral Biases of LLMs via Utility Theory
[ "Jillian Ross", "Yoon Kim", "Andrew Lo" ]
Conference
Poster
2408.02784
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
171
null
https://openreview.net/forum?id=RLFca3arx7
@inproceedings{ gupta2024calm, title={{CALM} : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias}, author={Vipul Gupta and Pranav Narayanan Venkit and Hugo Lauren{\c{c}}on and Shomir Wilson and Rebecca J. Passonneau}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=RLFca3arx7} }
As language models (LMs) become increasingly powerful and widely used, it is important to quantify them for sociodemographic bias with potential for harm. Prior measures of bias are sensitive to perturbations in the templates designed to compare performance across social groups, due to factors such as low diversity or limited number of templates. Also, most previous work considers only one NLP task. We introduce Comprehensive Assessment of Language Models (CALM) for robust measurement of social biases. We use sixteen datasets for question-answering, sentiment analysis and natural language inference and filter them to produce 224 templates with high diversity (e.g., length, vocabulary). This helps us create a novel dataset of 78,400 prompts covering the three NLP tasks. Our empirical evaluation shows that CALM bias scores are more robust and far less sensitive than previous bias measurements to perturbations in the templates, such as synonym substitution, or to random subset selection of templates. We apply CALM to 20 large language models, and find that for 2 LM series, larger parameter models tend to be more biased than smaller ones. The T0 series is the least biased model family of the 20 LLMs investigated here.
CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
[ "Vipul Gupta", "Pranav Narayanan Venkit", "Hugo Laurençon", "Shomir Wilson", "Rebecca J. Passonneau" ]
Conference
Poster
2308.12539
[ "https://github.com/vipulgupta1011/calm" ]
-1
-1
-1
-1
[]
[]
[]
0
172
null
https://openreview.net/forum?id=RCdoMrg4I0
@inproceedings{ du2024chinese, title={Chinese Tiny {LLM}: Pretraining a Chinese-Centered Large Language Model}, author={Xeron Du and Zhouliang Yu and Songyang Gao and Ding Pan and Cheng Yuyang and Ziyang Ma and Ruibin Yuan and Xingwei Qu and Jiaheng Liu and Tianyu Zheng and Xinchen Luo and Guorui Zhou and Wenhu Chen and Ge Zhang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=RCdoMrg4I0} }
In this study, we introduce $\textbf{CT-LLM}$, a groundbreaking 2B large language model (LLM) that illustrates a pivotal shift towards prioritizing the Chinese language in the development of LLMs. Uniquely initiated from scratch, CT-LLM diverges from the conventional methodology by primarily incorporating Chinese textual data, utilizing an extensive corpus of 1,200 billion tokens, including 800 billion Chinese tokens and 400 billion English tokens. This strategic composition facilitates the model's exceptional proficiency in understanding and processing Chinese, a capability further enhanced through alignment techniques including supervised fine-tuning (SFT) and direct preference optimization (DPO). Demonstrating remarkable performance on the ChineseHardCase Benchmark, CT-LLM not only excels in Chinese language tasks but also showcases its adeptness in English through SFT. This research challenges the prevailing paradigm of training LLMs predominantly on English corpora and then adapting them to other languages, broadening the horizons for LLM training methodologies. By open-sourcing the full process of CT-LLM, we aim to foster further exploration and innovation within both the academic and industrial spheres, paving the way for more inclusive and versatile language models in the future.
Chinese Tiny LLM: Pretraining a Chinese-Centered Large Language Model
[ "Xeron Du", "Zhouliang Yu", "Songyang Gao", "Ding Pan", "Cheng Yuyang", "Ziyang Ma", "Ruibin Yuan", "Xingwei Qu", "Jiaheng Liu", "Tianyu Zheng", "Xinchen Luo", "Guorui Zhou", "Wenhu Chen", "Ge Zhang" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
173
null
https://openreview.net/forum?id=Qmq4zqdnWh
@inproceedings{ wadhwa2024using, title={Using Natural Language Explanations to Rescale Human Judgments}, author={Manya Wadhwa and Jifan Chen and Junyi Jessy Li and Greg Durrett}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Qmq4zqdnWh} }
The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation. A common practice is to label data via consensus annotation over human judgments. However, annotators' judgments for subjective tasks can differ in many ways: they may reflect different qualitative judgments about an example, and they may be mapped to a labeling scheme in different ways. We show that these nuances can be captured by natural language explanations, and propose a method to rescale ordinal annotations and explanations using LLMs. Specifically, we feed annotators' Likert ratings and corresponding explanations into an LLM and prompt it to produce a numeric score anchored in a scoring rubric. These scores should reflect the annotators' underlying assessments of the example. The rubric can be designed or modified after annotation, and include distinctions that may not have been known when the original error taxonomy was devised. We explore our technique in the context of rating system outputs for a document-grounded question answering task, where LLMs achieve near-human performance. Our method rescales the raw judgments without impacting agreement and brings the scores closer to human judgments grounded in the same scoring rubric.
Using Natural Language Explanations to Rescale Human Judgments
[ "Manya Wadhwa", "Jifan Chen", "Junyi Jessy Li", "Greg Durrett" ]
Conference
Poster
2305.14770
[ "https://github.com/manyawadhwa/explanation_based_rescaling" ]
https://huggingface.co/papers/2305.14770
1
0
0
4
[]
[ "wadhma/EBR" ]
[]
1
174
null
https://openreview.net/forum?id=QdWhj0QZFw
@inproceedings{ liu2024llm, title={{LLM}360: Towards Fully Transparent Open-Source {LLM}s}, author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Timothy Baldwin and Eric P. Xing}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=QdWhj0QZFw} }
The recent surge in open-source Large Language Models (LLMs), such as LLaMA, Falcon, and Mistral, provides diverse options for AI practitioners and researchers. However, most LLMs have only released partial artifacts, such as the final model weights or inference code, and technical reports increasingly limit their scope to high-level design choices and surface statistics. These choices hinder progress in the field by degrading transparency into the training of LLMs and forcing teams to rediscover many details in the training process. We present **LLM360**, an initiative to fully open-source LLMs, which advocates for all training code and data, model checkpoints, and intermediate results to be made available to the community. The goal of LLM360 is to support open and collaborative AI research by making the end-to-end LLM training process transparent and reproducible by everyone. As a first step of LLM360, we release two 7B parameter LLMs pre-trained from scratch, Amber and Crystal, including their training code, data, intermediate checkpoints, and analyses. We are committed to continually pushing the boundaries of LLMs through this open-source effort. More large-scale and stronger models are underway and will be released in the future.
LLM360: Towards Fully Transparent Open-Source LLMs
[ "Zhengzhong Liu", "Aurick Qiao", "Willie Neiswanger", "Hongyi Wang", "Bowen Tan", "Tianhua Tao", "Junbo Li", "Yuqi Wang", "Suqi Sun", "Omkar Pangarkar", "Richard Fan", "Yi Gu", "Victor Miller", "Yonghao Zhuang", "Guowei He", "Haonan Li", "Fajri Koto", "Liping Tang", "Nikhil Ranjan", "Zhiqiang Shen", "Roberto Iriondo", "Cun Mu", "Zhiting Hu", "Mark Schulze", "Preslav Nakov", "Timothy Baldwin", "Eric P. Xing" ]
Conference
Poster
2312.06550
[ "https://github.com/llm360/analysis360" ]
-1
-1
-1
-1
[]
[]
[]
0
175
null
https://openreview.net/forum?id=QbCHlIqbDJ
@inproceedings{ fan2024from, title={From Narratives to Numbers: Valid Inference Using Language Model Predictions from Verbal Autopsies}, author={Shuxian Fan and Adam Visokay and Kentaro Hoffman and Stephen Salerno and Li Liu and Jeffrey T. Leek and Tyler McCormick}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=QbCHlIqbDJ} }
In settings where most deaths occur outside the healthcare system, verbal autopsies (VAs) are a common tool to monitor trends in causes of death (COD). VAs are interviews with a surviving caregiver or relative that are used to predict the decedent’s COD. Turning VAs into actionable insights for researchers and policymakers requires two steps: (i) predicting likely COD using the VA interview and (ii) performing inference with predicted CODs (e.g. modeling the breakdown of causes by demographic factors using a sample of deaths). In this paper, we develop a method for valid inference using outcomes (in our case COD) predicted from free-form text using state-of-the-art NLP techniques. This method, which we call multiPPI++, extends recent work in “prediction-powered inference” to multinomial classification. We leverage a suite of NLP techniques for COD prediction and, through empirical analysis of VA data, we demonstrate the effectiveness of our approach in handling transportability issues. multiPPI++ recovers ground truth estimates, regardless of which NLP model produced predictions and regardless of whether they were produced by a more accurate predictor like GPT-4-32k or a less accurate predictor like KNN. Our findings demonstrate the practical importance of inference correction for public health decision-making and suggest that if inference tasks are the end goal, having a small amount of contextually relevant, high quality labeled data is essential regardless of the NLP algorithm.
From Narratives to Numbers: Valid Inference Using Language Model Predictions from Verbal Autopsies
[ "Shuxian Fan", "Adam Visokay", "Kentaro Hoffman", "Stephen Salerno", "Li Liu", "Jeffrey T. Leek", "Tyler McCormick" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
176
null
https://openreview.net/forum?id=QJvfpWSpWm
@inproceedings{ hassid2024the, title={The Larger the Better? Improved {LLM} Code-Generation via Budget Reallocation}, author={Michael Hassid and Tal Remez and Jonas Gehring and Roy Schwartz and Yossi Adi}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=QJvfpWSpWm} }
It is a common belief that large language models (LLMs) are better than smaller-sized ones. However, larger models also require significantly more time and compute during inference. This begs the question: what happens when both models operate under the same budget (e.g., compute, run-time)? To address this question, we analyze code generation LLMs of various sizes and make comparisons such as running a 70B model once vs. generating five outputs from a 13B model. We consider a standard unit-test setup, which can be used to select the correct output from the smaller model. Our findings reveal that the repeated use of smaller models can yield consistent improvements, with gains of up to 15% across five tasks. On the other hand, in scenarios where unit-tests are unavailable, a ranking-based selection of candidates from the smaller model falls short of the performance of a single output from larger ones. Our results highlight the potential of using smaller models instead of larger ones, and the importance of studying approaches for ranking LLM outputs.
The Larger the Better? Improved LLM Code-Generation via Budget Reallocation
[ "Michael Hassid", "Tal Remez", "Jonas Gehring", "Roy Schwartz", "Yossi Adi" ]
Conference
Poster
2404.00725
[ "https://github.com/slp-rl/budget-realloc" ]
-1
-1
-1
-1
[]
[]
[]
0
177
null
https://openreview.net/forum?id=Pvn1dKreZW
@inproceedings{ qian2024merge, title={''Merge Conflicts!''' Exploring the Impacts of External Knowledge Distractors to Parametric Knowledge Graphs}, author={Cheng Qian and Xinran Zhao and Tongshuang Wu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Pvn1dKreZW} }
Large language models (LLMs) acquire extensive knowledge during pre-training, known as their parametric knowledge. However, to remain up-to-date and align with human instructions, LLMs inevitably require external knowledge during interactions. This raises a crucial question: How will LLMs respond when external knowledge interferes with their parametric knowledge? To uncover the impacts systematically, we construct parametric knowledge graphs to reveal different LLM knowledge structures, and introduce external information through external knowledge distractors of varying degrees, methods, positions, and formats. Experiments on both closed and open-source models demonstrate that LLMs tend to believe in external knowledge sources, particularly when they directly conflict or make confounding changes within detailed contexts. We also discover that while LLMs are sensitive to external knowledge veracity, they still get distracted by unrelated information. These findings highlight the mechanisms behind LLMs' integration of external knowledge, even indirectly, during model-user interactions.
"Merge Conflicts!'" Exploring the Impacts of External Knowledge Distractors to Parametric Knowledge Graphs
[ "Cheng Qian", "Xinran Zhao", "Tongshuang Wu" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
178
null
https://openreview.net/forum?id=PPTrmvEnpW
@inproceedings{ karvonen2024emergent, title={Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models}, author={Adam Karvonen}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=PPTrmvEnpW} }
Language models have shown unprecedented capabilities, sparking debate over the source of their performance. Is it merely the outcome of learning syntactic patterns and surface-level statistics, or do they extract semantics and a world model from the text? Prior work by Li et al. investigated this by training a GPT model on synthetic, randomly generated Othello games and found that the model learned an internal representation of the board state. We extend this work into the more complex domain of chess, training on real games and investigating our model's internal representations using linear probes and contrastive activations. The model is given no a priori knowledge of the game and is solely trained on next character prediction, yet we find evidence of internal representations of board state. We validate these internal representations by using them to make interventions on the model's activations and edit its internal board state. Unlike Li et al.'s prior synthetic dataset approach, our analysis finds that the model also learns to estimate latent variables like player skill to better predict the next character. We derive a player skill vector and add it to the model, improving the model's win rate by up to 2.6 times.
Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models
[ "Adam Karvonen" ]
Conference
Poster
2403.15498
[ "https://github.com/adamkarvonen/chess_llm_interpretability" ]
-1
-1
-1
-1
[]
[]
[]
0
179
null
https://openreview.net/forum?id=PKfAq8N4fK
@inproceedings{ wu2024agentkit, title={AgentKit: Structured {LLM} Reasoning with Dynamic Graphs}, author={Yue Wu and Yewen Fan and So Yeon Min and Shrimai Prabhumoye and Stephen Marcus McAleer and Ruslan Salakhutdinov and Yonatan Bisk and Yuanzhi Li and Tom Mitchell}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=PKfAq8N4fK} }
We propose an intuitive LLM prompting framework (AgentKit) for multifunctional agents. AgentKit offers a unified framework for explicitly constructing a complex "thought process" from simple natural language prompts. The basic building block in AgentKit is a **node**, containing a natural language prompt for a specific subtask. The user then puts together chains of nodes, in order to build a "thought process" for any problem, like stacking LEGO pieces. The chains of nodes can be designed to explicitly enforce a naturally **structured** "thought process". For example, for the task of writing a paper, one may start with the thought process of 1) identify a core message, 2) identify prior research gaps, etc. The nodes in AgentKit can be designed and combined in different ways to implement multiple advanced capabilities including on-the-fly hierarchical planning, reflection, and learning from interactions. In addition, due to the modular nature and the intuitive design to simulate explicit human thought process, a basic agent could be implemented as simple as a list of prompts for the subtasks and therefore could be designed and tuned by someone *without any programming experience*. Quantitatively, we show that agents designed through AgentKit achieve SOTA performance on Webshop and Crafter. These advances underscore AgentKit's potential in making LLM agents effective and accessible for a wider range of applications.
AgentKit: Structured LLM Reasoning with Dynamic Graphs
[ "Yue Wu", "Yewen Fan", "So Yeon Min", "Shrimai Prabhumoye", "Stephen Marcus McAleer", "Ruslan Salakhutdinov", "Yonatan Bisk", "Yuanzhi Li", "Tom Mitchell" ]
Conference
Poster
2404.11483
[ "https://github.com/holmeswww/agentkit" ]
-1
-1
-1
-1
[]
[]
[]
0
180
null
https://openreview.net/forum?id=PEQFHRUFca
@inproceedings{ zheng2024a, title={A Reparameterized Discrete Diffusion Model for Text Generation}, author={Lin Zheng and Jianbo Yuan and Lei Yu and Lingpeng Kong}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=PEQFHRUFca} }
This work studies discrete diffusion probabilistic models with applications to natural language generation. We derive an alternative yet equivalent formulation of the sampling from discrete diffusion processes and leverage this insight to develop a family of reparameterized discrete diffusion models. The derived generic framework is highly flexible, offers a fresh perspective of the generation process in discrete diffusion models, and features more effective training and decoding techniques. We conduct extensive experiments to evaluate the text generation capability of our model, demonstrating significant improvements over existing diffusion models.
A Reparameterized Discrete Diffusion Model for Text Generation
[ "Lin Zheng", "Jianbo Yuan", "Lei Yu", "Lingpeng Kong" ]
Conference
Poster
2302.05737
[ "https://github.com/hkunlp/reparam-discrete-diffusion" ]
https://huggingface.co/papers/2302.05737
0
0
0
4
[]
[]
[]
1
181
null
https://openreview.net/forum?id=OJaWBhh61C
@inproceedings{ liu2024best, title={Best Practices and Lessons Learned on Synthetic Data}, author={Ruibo Liu and Jerry Wei and Fangyu Liu and Chenglei Si and Yanzhe Zhang and Jinmeng Rao and Steven Zheng and Daiyi Peng and Diyi Yang and Denny Zhou and Andrew M. Dai}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=OJaWBhh61C} }
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
Best Practices and Lessons Learned on Synthetic Data
[ "Ruibo Liu", "Jerry Wei", "Fangyu Liu", "Chenglei Si", "Yanzhe Zhang", "Jinmeng Rao", "Steven Zheng", "Daiyi Peng", "Diyi Yang", "Denny Zhou", "Andrew M. Dai" ]
Conference
Poster
2404.07503
[ "" ]
https://huggingface.co/papers/2404.07503
8
29
1
11
[]
[]
[]
1
182
null
https://openreview.net/forum?id=NikbrdtYvG
@inproceedings{ pfau2024lets, title={Let{\textquoteright}s Think Dot by Dot: Hidden computation in transformer language models}, author={Jacob Pfau and William Merrill and Samuel R. Bowman}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=NikbrdtYvG} }
Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task decomposition or simply the greater computation that additional tokens allow. We show that transformers can use meaningless filler tokens (e.g., ‘......’) in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens. However, we find empirically that learning to use filler tokens is difficult and requires specific, dense supervision to converge. We also provide a theoretical conjecture for the class of problems where filler tokens are useful in terms of the quantifier depth of a first-order formula. For problems satisfying this characterization, chain-of-thought tokens need not provide information about the intermediate computational steps involved in multi-token computations. In summary, our results show that additional tokens can provide computational benefits independent of token choice. The fact that intermediate tokens can act as filler tokens raises concerns about large language models engaging in unauditable, hidden computations that are increasingly detached from the observed chain-of-thought tokens.
Let’s Think Dot by Dot: Hidden computation in transformer language models
[ "Jacob Pfau", "William Merrill", "Samuel R. Bowman" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
183
null
https://openreview.net/forum?id=Nd950RAcCW
@inproceedings{ cheng2024multihop, title={Multi-hop Question Answering under Temporal Knowledge Editing}, author={Keyuan Cheng and Gang Lin and Haoyang Fei and Yuxuan Zhai and Lu Yu and Muhammad Asif Ali and Lijie Hu and Di Wang}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Nd950RAcCW} }
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit temporal contexts. To address this limitation, we propose a novel framework, namely TEMPoral knowLEdge augmented Multi-hop Question Answering (TEMPLE-MQA). Unlike previous methods, TEMPLE-MQA first constructs a time-aware graph (TAG) to store edit knowledge in a structured manner. Then, through our proposed inference path, structural retrieval, and joint reasoning stages, TEMPLE-MQA effectively discerns temporal contexts within the question query. Experiments on benchmark datasets demonstrate that TEMPLE-MQA significantly outperforms baseline models. Additionally, we contribute a new dataset, namely TKEMQA, which serves as the inaugural benchmark tailored specifically for MQA with temporal scopes.
Multi-hop Question Answering under Temporal Knowledge Editing
[ "Keyuan Cheng", "Gang Lin", "Haoyang Fei", "Yuxuan Zhai", "Lu Yu", "Muhammad Asif Ali", "Lijie Hu", "Di Wang" ]
Conference
Poster
2404.00492
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
184
null
https://openreview.net/forum?id=NV8yRJRET1
@inproceedings{ zala2024diagrammergpt, title={Diagrammer{GPT}: Generating Open-Domain, Open-Platform Diagrams via {LLM} Planning}, author={Abhay Zala and Han Lin and Jaemin Cho and Mohit Bansal}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=NV8yRJRET1} }
Text-to-image (T2I) generation has seen significant growth over the past few years. Despite this, there has been little work on generating diagrams with T2I models. A diagram is a symbolic/schematic representation that explains information using structurally rich and spatially complex visualizations (e.g., a dense combination of related objects, text labels, directional arrows/lines, etc.). Existing state-of-the-art T2I models often fail at diagram generation because they lack fine-grained object layout control when many objects are densely connected via complex relations such as arrows/lines, and also often fail to render comprehensible text labels. To address this gap, we present DiagrammerGPT, a novel two-stage text-to-diagram generation framework leveraging the layout guidance capabilities of LLMs to generate more accurate diagrams. In the first stage, we use LLMs to generate and iteratively refine ‘diagram plans’ (in a planner-auditor feedback loop). In the second stage, we use a diagram generator, DiagramGLIGEN, and a text label rendering module to generate diagrams (with clear text labels) following the diagram plans. To benchmark the text-to-diagram generation task, we introduce AI2D-Caption, a densely annotated diagram dataset built on top of the AI2D dataset. We show that our DiagrammerGPT framework produces more accurate diagrams, outperforming existing T2I models. We also provide comprehensive analysis, including open-domain diagram generation, multi-platform vector graphic diagram generation, human-in-the-loop editing, and multimodal planner/auditor LLMs.
DiagrammerGPT: Generating Open-Domain, Open-Platform Diagrams via LLM Planning
[ "Abhay Zala", "Han Lin", "Jaemin Cho", "Mohit Bansal" ]
Conference
Poster
2310.12128
[ "" ]
https://huggingface.co/papers/2310.12128
2
0
0
4
[]
[ "abhayzala/AI2D-Caption" ]
[]
1
185
null
https://openreview.net/forum?id=NPAQ6FKSmK
@inproceedings{ pan2024autonomous, title={Autonomous Evaluation and Refinement of Digital Agents}, author={Jiayi Pan and Yichi Zhang and Nicholas Tomlin and Yifei Zhou and Sergey Levine and Alane Suhr}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=NPAQ6FKSmK} }
We show that domain-general automatic evaluators can significantly improve the performance of agents for web navigation and device control. We experiment with multiple evaluation models that trade off between inference cost, modularity of design, and accuracy. We validate the performance of these models in several popular benchmarks for digital agents, finding between 74.4 and 92.9% agreement with oracle evaluation metrics. Finally, we use these evaluators to improve the performance of existing agents via fine-tuning and inference-time guidance. Without any additional supervision, we improve state-of-the-art performance by 29% on the popular benchmark WebArena, and achieve around 75% relative improvement in device control settings. We release our code and data at [https://github.com/Berkeley-NLP/Agent-Eval-Refine](https://github.com/Berkeley-NLP/Agent-Eval-Refine)
Autonomous Evaluation and Refinement of Digital Agents
[ "Jiayi Pan", "Yichi Zhang", "Nicholas Tomlin", "Yifei Zhou", "Sergey Levine", "Alane Suhr" ]
Conference
Poster
2404.06474
[ "https://github.com/berkeley-nlp/agent-eval-refine" ]
https://huggingface.co/papers/2404.06474
1
1
1
6
[]
[]
[ "Agent-Eval-Refine/Captioner" ]
1
186
null
https://openreview.net/forum?id=N5EYQSwW26
@inproceedings{ okazaki2024building, title={Building a Large Japanese Web Corpus for Large Language Models}, author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Rio Yokota and Sakae Mizuki}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=N5EYQSwW26} }
Open Japanese large language models (LLMs) have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not created for the quality of Japanese texts. This study builds a large Japanese web corpus by extracting and refining text from the Common Crawl archive (21 snapshots of approximately 63.4 billion pages crawled between 2020 and 2023). This corpus consists of approximately 312.1 billion characters (approximately 173 million pages), which is the largest of all available training corpora for Japanese LLMs, surpassing CC-100 (approximately 25.8 billion characters), mC4 (approximately 239.7 billion characters) and OSCAR 23.01 (approximately 74 billion characters). To confirm the quality of the corpus, we performed continual pre-training on Llama 2 7B, 13B, 70B, Mistral 7B v0.1, and Mixtral 8x7B as base LLMs and gained consistent (6.6-8.1 points) improvements on Japanese benchmark datasets. We also demonstrate that the improvement on Llama 2 13B brought from the presented corpus was the largest among those from other existing corpora.
Building a Large Japanese Web Corpus for Large Language Models
[ "Naoaki Okazaki", "Kakeru Hattori", "Hirai Shota", "Hiroki Iida", "Masanari Ohi", "Kazuki Fujii", "Taishi Nakamura", "Mengsay Loem", "Rio Yokota", "Sakae Mizuki" ]
Conference
Poster
2404.17733
[ "" ]
https://huggingface.co/papers/2404.17733
2
3
0
10
[ "tokyotech-llm/Swallow-7b-instruct-hf", "tokyotech-llm/Swallow-70b-instruct-hf", "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1", "tokyotech-llm/Swallow-MS-7b-v0.1", "tokyotech-llm/Swallow-13b-instruct-hf", "tokyotech-llm/Swallow-7b-hf", "tokyotech-llm/Swallow-13b-hf", "tokyotech-llm/Swallow-7b-plus-hf", "tokyotech-llm/Llama-3-Swallow-8B-v0.1", "tokyotech-llm/Swallow-70b-hf", "tokyotech-llm/Llama-3-Swallow-70B-v0.1", "tokyotech-llm/Swallow-7b-NVE-instruct-hf", "tokyotech-llm/Swallow-70b-NVE-instruct-hf", "tokyotech-llm/Swallow-7b-NVE-hf", "tokyotech-llm/Swallow-70b-NVE-hf", "tokyotech-llm/Swallow-13b-NVE-hf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-4bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-8bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-4bits", "RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-8bits", "RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-70b-NVE-instruct-hf-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf", "RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-8B-v0.1-gguf", "RichardErkhov/tokyotech-llm_-_Swallow-7b-hf-gguf", "RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-70B-v0.1-gguf" ]
[]
[ "featherless-ai/try-this-model", "hayas/Swallow-13B-instruct", "Darok/Featherless-Feud", "mmnga/vocabviewer", "Granther/try-this-model", "emekaboris/try-this-model", "isonuma/marutenbo", "kmero/tokyotech-llm-Swallow-70b-instruct-hf", "Huaibo/tokyotech-llm-Swallow-7b-instruct-hf" ]
1
187
null
https://openreview.net/forum?id=MoitXWlXcS
@inproceedings{ godey2024why, title={Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck}, author={Nathan Godey and {\'E}ric Villemonte de la Clergerie and Beno{\^\i}t Sagot}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MoitXWlXcS} }
Recent advances in language modeling consist in pretraining highly parameterized neural networks on extremely large web-mined text corpora. Training and inference with such models can be costly in practice, which incentivizes the use of smaller counterparts. However, it has been observed that smaller models can suffer from saturation, characterized as a drop in performance at some advanced point in training followed by a plateau. In this paper, we find that such saturation can be explained by a mismatch between the hidden dimension of smaller models and the high rank of the target contextual probability distribution. This mismatch affects the performance of the linear prediction head used in such models through the well-known softmax bottleneck phenomenon. We measure the effect of the softmax bottleneck in various settings and estimate that models based on less than roughly 1000 hidden dimensions tend to adopt degenerate latent representations in late pretraining, which leads to reduced evaluation performance.
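A minimal way to see the bottleneck the abstract describes is to inspect the singular-value spectrum of a small model's output (unembedding) matrix, whose rank is capped by the hidden dimension. This sketch is not the paper's measurement protocol; the choice of `gpt2` (hidden size 768, i.e., below the roughly 1000-dimension threshold mentioned above) is purely illustrative.

```python
# Minimal sketch: look at the singular-value spectrum of a small model's LM head to
# see the low-rank cap that the softmax layer imposes on the output distribution.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
W = model.get_output_embeddings().weight.detach().float()  # (vocab_size, hidden_dim)
print("LM head shape:", tuple(W.shape))  # rank is at most hidden_dim = 768

s = torch.linalg.svdvals(W)
s = s / s.sum()                                   # normalized spectrum
entropy = -(s * torch.log(s + 1e-12)).sum()
print("effective rank (exp of spectral entropy):", torch.exp(entropy).item())
```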
Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck
[ "Nathan Godey", "Éric Villemonte de la Clergerie", "Benoît Sagot" ]
Conference
Poster
2404.07647
[ "" ]
https://huggingface.co/papers/2404.07647
1
4
0
3
[]
[]
[]
1
188
null
https://openreview.net/forum?id=MmBQSNHKUl
@inproceedings{ le2024are, title={Are Language Models Robust Coreference Resolvers?}, author={Nghia T. Le and Alan Ritter}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MmBQSNHKUl} }
Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language. At the same time, pre-trained large language models (LMs) have been reported to exhibit strong zero- and few-shot learning abilities across a wide range of NLP tasks. However, prior work mostly studied this ability using artificial sentence-level datasets such as the Winograd Schema Challenge. In this paper, we assess the feasibility of prompt-based coreference resolution by evaluating instruction-tuned language models on difficult, linguistically-complex coreference benchmarks (e.g., CoNLL-2012). We show that prompting for coreference can outperform current unsupervised coreference systems, although this approach appears to be reliant on high-quality mention detectors. Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods; yet continued fine-tuning of neural models should still be preferred if small amounts of annotated examples are available.
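The paper's prompt format is not given in this abstract, so the following is a hypothetical template for prompt-based mention clustering, assuming mentions come from a separate mention detector, as the abstract suggests.

```python
# Hypothetical prompt layout for prompt-based coreference; the actual template used
# in the paper may differ. The prompt is printed rather than sent to a model.
text = "Alice met Bob at the station. She gave him the tickets."
mentions = ["Alice", "Bob", "She", "him", "the tickets"]  # from a mention detector

prompt = (
    "Cluster the following mentions from the passage into coreference chains.\n"
    f"Passage: {text}\n"
    "Mentions: " + ", ".join(f"[{i}] {m}" for i, m in enumerate(mentions)) + "\n"
    "Answer with groups of mention indices, one chain per line."
)
print(prompt)
# The expected answer here would be the chains {0, 2} (Alice/She), {1, 3} (Bob/him),
# and the singleton {4} (the tickets).
```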
Are Language Models Robust Coreference Resolvers?
[ "Nghia T. Le", "Alan Ritter" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
189
null
https://openreview.net/forum?id=MkppMETE49
@inproceedings{ sharma2024information, title={Information Guided Regularization for Fine-tuning Language Models}, author={Mandar Sharma and Nikhil Muralidhar and Shengzhe Xu and Raquib Bin Yousuf and Naren Ramakrishnan}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MkppMETE49} }
The pretraining-fine-tuning paradigm has been the de facto strategy for transfer learning in modern language modeling. With the understanding that task adaptation in LMs is often a function of parameters shared across tasks, we argue that a more surgical approach to regularization is needed for smoother transfer learning. Towards this end, we investigate how the pretraining loss landscape is affected by these task-sensitive parameters through an information-theoretic lens. We then leverage the findings from our investigations to devise a novel approach to dropout for improved model regularization and better downstream generalization. This approach, named guided dropout, is both task- and architecture-agnostic and adds no computational overhead to the fine-tuning process. Through empirical evaluations, we showcase that our approach to regularization yields consistently better performance, even in scenarios of data paucity, compared to standardized baselines.
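The abstract does not specify how the information-theoretic findings translate into a dropout rule, so the sketch below is one plausible instantiation, assuming per-unit importance scores (e.g., squared-gradient statistics) modulate drop probabilities; it should not be read as the paper's exact guided-dropout procedure.

```python
# Hedged sketch of importance-guided dropout: units that look task-sensitive are
# dropped less often. This is an illustrative rule, not the paper's exact method.
import torch

def importance_guided_mask(importance: torch.Tensor, base_drop: float = 0.1) -> torch.Tensor:
    """Build a dropout mask whose per-unit drop probability shrinks with importance."""
    imp = importance / (importance.max() + 1e-12)        # normalize to [0, 1]
    drop_prob = base_drop * (1.0 - imp)                  # important unit -> low drop prob
    keep = torch.bernoulli(1.0 - drop_prob)
    return keep / (1.0 - drop_prob)                      # inverted-dropout rescaling

torch.manual_seed(0)
h = torch.randn(4, 8)                  # a batch of hidden activations
grads = torch.randn(8).abs()           # stand-in for squared-gradient importance stats
mask = importance_guided_mask(grads)
print((h * mask).shape)                # (4, 8), units masked according to importance
```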
Information Guided Regularization for Fine-tuning Language Models
[ "Mandar Sharma", "Nikhil Muralidhar", "Shengzhe Xu", "Raquib Bin Yousuf", "Naren Ramakrishnan" ]
Conference
Poster
2406.14005
[ "https://github.com/mandar-sharma/guided-dropout" ]
-1
-1
-1
-1
[]
[]
[]
0
190
null
https://openreview.net/forum?id=MXLBXjQkmb
@inproceedings{ zhang2024negative, title={Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning}, author={Ruiqi Zhang and Licong Lin and Yu Bai and Song Mei}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MXLBXjQkmb} }
Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from the pre-trained model while preserving the model's utilities on other tasks. Several practical methods have recently been proposed for LLM unlearning, mostly based on gradient ascent (GA) on the loss of undesirable data. However, on certain unlearning tasks, these methods either fail to effectively unlearn the target data or suffer from catastrophic collapse: a drastic degradation of the model's utilities. In this paper, we propose Negative Preference Optimization (NPO), a simple alignment-inspired method that could efficiently and effectively unlearn a target dataset. We theoretically show that the progression toward catastrophic collapse by minimizing the NPO loss is exponentially slower than GA. Through experiments on synthetic data and the benchmark TOFU dataset, we demonstrate that NPO-based methods achieve a better balance between unlearning the undesirable data and maintaining the model's utilities. We also observe that NPO-based methods generate more sensible outputs than GA-based methods, whose outputs are often gibberish. Remarkably, on TOFU, NPO-based methods are the first to achieve reasonable unlearning results in forgetting 50% (or more) of the training data, whereas existing methods already struggle with forgetting 10% of training data.
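As I understand the paper's formulation, the NPO objective on the forget set is (2/beta) * E[log(1 + (pi_theta(y|x) / pi_ref(y|x))^beta)], which recovers gradient ascent as beta goes to 0. A minimal sketch from per-sequence log-probabilities; the exact per-token treatment should be checked against the paper.

```python
# Sketch of an NPO-style unlearning loss computed from summed log-probabilities of
# forget-set responses under the current model and a frozen reference model.
import torch
import torch.nn.functional as F

def npo_loss(logp_theta: torch.Tensor, logp_ref: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """logp_theta, logp_ref: per-sequence log-probs on forget data, shape (batch,)."""
    log_ratio = logp_theta - logp_ref
    # log(1 + exp(beta * log_ratio)) == softplus(beta * log_ratio), computed stably.
    return (2.0 / beta) * F.softplus(beta * log_ratio).mean()

# Toy check: as the model's probability on forget data drops below the reference,
# the loss flattens toward 0 instead of diverging the way plain gradient ascent does.
logp_ref = torch.tensor([-20.0, -25.0])
for shift in (0.0, -5.0, -15.0):
    print(shift, npo_loss(logp_ref + shift, logp_ref).item())
```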
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning
[ "Ruiqi Zhang", "Licong Lin", "Yu Bai", "Song Mei" ]
Conference
Poster
2404.05868
[ "https://github.com/ucsb-nlp-chang/uld" ]
-1
-1
-1
-1
[]
[]
[]
0
191
null
https://openreview.net/forum?id=MNLAbfZwh2
@inproceedings{ elmaaroufi2024scenicnl, title={Scenic{NL}: Generating Probabilistic Scenario Programs from Natural Language}, author={Karim Elmaaroufi and Devan Shanker and Ana Cismaru and Marcell Vazquez-Chanlatte and Alberto Sangiovanni-Vincentelli and Matei Zaharia and Sanjit A. Seshia}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MNLAbfZwh2} }
For cyber-physical systems, including robotics and autonomous vehicles, mass deployment has been hindered by fatal errors that occur when operating in rare events. To better understand failure modes, companies meticulously recreate rare crash events in simulation, but current methods do not easily allow for exploring “what if” scenarios that could reveal how accidents might have been avoided. We present ScenicNL, an AI system that generates probabilistic scenario programs from natural language. Given the abundance of documented failures of autonomous vehicles due to regulatory requirements, we apply ScenicNL to police crash reports, providing a data-driven approach to capturing and understanding these failures. By using a probabilistic language such as Scenic, we can clearly and concisely represent such scenarios of interest and easily ask “what if” questions. We demonstrate how commonplace prompting techniques with Large Language Models are incapable of generating code for low-resource languages such as Scenic. We propose an AI system via the composition of several prompting techniques to extract the reasoning abilities needed to model probability distributions around the uncertainty in the crash events. Our system then uses Constrained Decoding and tools such as a compiler and simulator to produce scenario programs in this low-resource setting. We evaluate our system on publicly available autonomous vehicle crash reports in California from the last five years and share insights into how we generate code that is both semantically meaningful and syntactically correct. Finally, we release our code and a collection of over 500 crash reports from the California Department of Motor Vehicles.
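A hypothetical generate-and-validate loop in the spirit of the system described above; `draft_scenario` and `compiles` are stand-ins for the LLM prompting stack and the Scenic compiler/simulator, whose real interfaces are not specified in this abstract.

```python
# Hypothetical retry loop: draft a Scenic program from a crash report, check it with
# a compiler stand-in, and feed errors back into the next draft.
def draft_scenario(report: str, feedback: str = "") -> str:
    """Stand-in for the LLM prompting stack that drafts a Scenic program."""
    hint = f"  # compiler feedback: {feedback}" if feedback else ""
    return f"# Scenic draft for report: {report}{hint}"

def compiles(program: str) -> tuple[bool, str]:
    """Stand-in for invoking the Scenic compiler; returns (ok, error message)."""
    return True, ""

def report_to_program(report: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        program = draft_scenario(report, feedback)
        ok, feedback = compiles(program)
        if ok:
            return program
    raise RuntimeError("no syntactically valid program found")

print(report_to_program("Vehicle 1 rear-ended Vehicle 2 while merging in fog."))
```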
ScenicNL: Generating Probabilistic Scenario Programs from Natural Language
[ "Karim Elmaaroufi", "Devan Shanker", "Ana Cismaru", "Marcell Vazquez-Chanlatte", "Alberto Sangiovanni-Vincentelli", "Matei Zaharia", "Sanjit A. Seshia" ]
Conference
Poster
2405.03709
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
192
null
https://openreview.net/forum?id=MLD1cwfjUb
@inproceedings{ ebrahimi2024your, title={Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers}, author={MohammadReza Ebrahimi and Sunny Panchal and Roland Memisevic}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MLD1cwfjUb} }
Despite their recent successes, Transformer-based large language models show surprising failure modes. A well-known example of such failure modes is their inability to length-generalize: solving problem instances at inference time that are longer than those seen during training. In this work, we further explore the root cause of this failure by performing a detailed analysis of model behaviors on the simple parity task. Our analysis suggests that length generalization failures are intricately related to a model's inability to perform random memory accesses within its context window. We present supporting evidence for this hypothesis by demonstrating the effectiveness of methodologies that circumvent the need for indexing or that enable random token access indirectly, through content-based addressing. We further show where and how the failure to perform random memory access manifests through attention map visualizations.
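The parity task used in the analysis is easy to reproduce; the toy generator below shows the length-generalization split the abstract refers to (short bit strings for training, longer ones at test time). Training and evaluating an actual model are omitted.

```python
# Minimal parity-task data generator: the label is the parity of the number of ones.
import random

def parity_instance(length: int) -> tuple[str, str]:
    bits = [random.randint(0, 1) for _ in range(length)]
    label = str(sum(bits) % 2)                      # "1" iff an odd number of ones
    return " ".join(map(str, bits)), label

random.seed(0)
train = [parity_instance(random.randint(5, 20)) for _ in range(3)]    # in-distribution
test = [parity_instance(random.randint(40, 60)) for _ in range(3)]    # longer at test time
for x, y in train + test:
    print(f"{x} -> {y}")
```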
Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers
[ "MohammadReza Ebrahimi", "Sunny Panchal", "Roland Memisevic" ]
Conference
Poster
2408.05506
[ "" ]
https://huggingface.co/papers/2408.05506
1
8
2
3
[]
[]
[]
1
193
null
https://openreview.net/forum?id=MI52iXSSNy
@inproceedings{ fu2024commonsenseti, title={Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?}, author={Xingyu Fu and Muyu He and Yujie Lu and William Yang Wang and Dan Roth}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=MI52iXSSNy} }
We present a novel task and benchmark for evaluating the ability of text-to-image (T2I) generation models to produce images that align with commonsense in real life, which we call Commonsense-T2I. Given two adversarial text prompts containing an identical set of action words with minor differences, such as *a lightbulb without electricity* vs. *a lightbulb with electricity*, we evaluate whether T2I models can conduct visual-commonsense reasoning, e.g., produce images that fit *The lightbulb is unlit* vs. *The lightbulb is lit* correspondingly. Commonsense-T2I presents an adversarial challenge, providing pairwise text prompts along with expected outputs. The dataset is carefully hand-curated by experts and annotated with fine-grained labels, such as commonsense type and likelihood of the expected outputs, to assist in analyzing model behavior. We benchmark a variety of state-of-the-art (SOTA) T2I models and surprisingly find that there is still a large gap between image synthesis and real-life photos: even the DALL-E 3 model could only achieve 48.92% on Commonsense-T2I, and the Stable Diffusion XL model only achieves 24.92% accuracy. Our experiments show that GPT-enriched prompts cannot solve this challenge, and we include a detailed analysis of possible reasons for such deficiency. We aim for Commonsense-T2I to serve as a high-quality evaluation benchmark for T2I commonsense checking, fostering advancements in real-life image generation.
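The abstract reports a single accuracy per model over adversarial prompt pairs; one plausible scoring rule consistent with that description credits a pair only when both generations match their expected outputs. The sketch below assumes that rule and uses stand-ins for the T2I model and the judge, so it illustrates the evaluation shape rather than the benchmark's exact protocol.

```python
# Pairwise scoring sketch: a pair counts only if both generations are judged correct.
def pair_accuracy(pairs, generate, judge) -> float:
    correct = 0
    for (prompt_a, expected_a), (prompt_b, expected_b) in pairs:
        ok_a = judge(generate(prompt_a), expected_a)
        ok_b = judge(generate(prompt_b), expected_b)
        correct += int(ok_a and ok_b)
    return correct / len(pairs)

def generate(prompt: str) -> str:               # stand-in for a text-to-image model
    return f"image rendering: {prompt}"

def judge(image: str, expected: str) -> bool:   # stand-in for a multimodal judge
    return expected.split()[-1] in image

pairs = [
    (("a lightbulb without electricity", "the lightbulb is unlit"),
     ("a lightbulb with electricity", "the lightbulb is lit")),
]
print(f"pair accuracy: {pair_accuracy(pairs, generate, judge):.2f}")
```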
Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?
[ "Xingyu Fu", "Muyu He", "Yujie Lu", "William Yang Wang", "Dan Roth" ]
Conference
Poster
2406.07546
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
194
null
https://openreview.net/forum?id=LzpaUxcNFK
@inproceedings{ vacareanu2024from, title={From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples}, author={Robert Vacareanu and Vlad Andrei Negru and Vasile Suciu and Mihai Surdeanu}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=LzpaUxcNFK} }
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc.) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance rivaling (or even outperforming) that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, or Gradient Boosting. We then investigate how well the performance of large language models scales with the number of in-context exemplars. We borrow from the notion of regret from online learning and empirically show that LLMs are capable of obtaining a sub-linear regret.
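A sketch of the in-context regression setup: numeric (x, y) exemplars are serialized into a prompt (the paper's exact serialization may differ) and a standard supervised baseline is fit on the same exemplars for comparison, here using scikit-learn's Friedman #2 generator. The prompt is printed rather than sent to a model.

```python
# In-context regression sketch: serialize exemplars for an LLM and fit a Random
# Forest on the same exemplars as a point of comparison.
import numpy as np
from sklearn.datasets import make_friedman2
from sklearn.ensemble import RandomForestRegressor

X, y = make_friedman2(n_samples=51, noise=0.0, random_state=0)
X_train, y_train, x_query = X[:50], y[:50], X[50]

lines = [
    "Predict the output for the final input, given the examples below.",
    *(f"Input: {np.round(xi, 2).tolist()} Output: {yi:.2f}" for xi, yi in zip(X_train[:3], y_train[:3])),
    "...",  # remaining exemplars elided for brevity in this sketch
    f"Input: {np.round(x_query, 2).tolist()} Output:",
]
print("\n".join(lines))  # this prompt would be sent to the LLM

baseline = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("Random Forest prediction:", baseline.predict(x_query.reshape(1, -1))[0])
```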
From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples
[ "Robert Vacareanu", "Vlad Andrei Negru", "Vasile Suciu", "Mihai Surdeanu" ]
Conference
Poster
2404.07544
[ "https://github.com/robertvacareanu/llm4regression" ]
https://huggingface.co/papers/2404.07544
1
18
1
4
[]
[]
[]
1
195
null
https://openreview.net/forum?id=Lmjgl2n11u
@inproceedings{ mondorf2024beyond, title={Beyond Accuracy: Evaluating the Reasoning Behavior of Large Language Models - A Survey}, author={Philipp Mondorf and Barbara Plank}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=Lmjgl2n11u} }
Large language models (LLMs) have recently shown impressive performance on tasks involving reasoning, leading to a lively debate on whether these models possess reasoning capabilities similar to humans. However, despite these successes, the depth of LLMs' reasoning abilities remains uncertain. This uncertainty partly stems from the predominant focus on task performance, measured through shallow accuracy metrics, rather than a thorough investigation of the models' reasoning behavior. This paper seeks to address this gap by providing a comprehensive review of studies that go beyond task accuracy, offering deeper insights into the models' reasoning processes. Furthermore, we survey prevalent methodologies to evaluate the reasoning behavior of LLMs, emphasizing current trends and efforts towards more nuanced reasoning analyses. Our review suggests that LLMs tend to rely on surface-level patterns and correlations in their training data, rather than on sophisticated reasoning abilities. Additionally, we identify the need for further research that delineates the key differences between human and LLM-based reasoning. Through this survey, we aim to shed light on the complex reasoning processes within LLMs.
Beyond Accuracy: Evaluating the Reasoning Behavior of Large Language Models - A Survey
[ "Philipp Mondorf", "Barbara Plank" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
196
null
https://openreview.net/forum?id=LWfDcI6txJ
@inproceedings{ armengol-estap{\'e}2024forklift, title={Forklift: An Extensible Neural Lifter}, author={Jordi Armengol-Estap{\'e} and Rodrigo C. O. Rocha and Jackson Woodruff and Pasquale Minervini and Michael O'Boyle}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=LWfDcI6txJ} }
The escalating demand to migrate legacy software across different Instruction Set Architectures (ISAs) has driven the development of assembly-to-assembly translators to map between their respective assembly languages. However, the development of these tools requires substantial engineering effort. State-of-the-art approaches use lifting, a technique where source assembly code is translated to an architecture-independent intermediate representation (IR) such as the LLVM IR, and use a pre-existing compiler to recompile the IR to the target ISA. However, the hand-written rules these lifters employ are sensitive to the particular compiler and optimization level used to generate the code and require significant engineering effort to support each new ISA. We propose Forklift, the first neural lifter that learns how to translate assembly to LLVM IR using a token-level encoder-decoder Transformer. We show how to incrementally add support for new ISAs by fine-tuning the assembly encoder and freezing the IR decoder, improving the overall accuracy and efficiency. We collect millions of parallel LLVM IR, x86, ARM, and RISC-V programs across compilers and optimization levels to train Forklift and set up an input/output-based accuracy harness. We evaluate Forklift on two challenging benchmark suites and translate 2.5x more x86 programs than a state-of-the-art hand-written lifter and 4.4x more x86 programs than GPT-4, as well as enabling translation from new ISAs.
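The incremental-ISA recipe above (fine-tune the assembly-side encoder, freeze the IR-side decoder) can be sketched generically; `t5-small` is only a stand-in checkpoint, since the paper trains its own encoder-decoder model.

```python
# Generic sketch of the freeze-the-decoder recipe: the IR decoder stays fixed while
# the encoder is adapted to a new source ISA.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
for param in model.get_decoder().parameters():
    param.requires_grad = False
# Note: T5 ties encoder/decoder token embeddings, so the shared embedding is frozen
# too; a model with separate source/target vocabularies would keep its source
# embedding trainable alongside the encoder.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```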
Forklift: An Extensible Neural Lifter
[ "Jordi Armengol-Estapé", "Rodrigo C. O. Rocha", "Jackson Woodruff", "Pasquale Minervini", "Michael O'Boyle" ]
Conference
Poster
2404.16041
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
197
null
https://openreview.net/forum?id=LKEJPySnlt
@inproceedings{ zhong2024lory, title={Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training}, author={Zexuan Zhong and Mengzhou Xia and Danqi Chen and Mike Lewis}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=LKEJPySnlt} }
Mixture-of-experts (MoE) models facilitate efficient scaling; however, training the router network introduces the challenge of optimizing a non-differentiable, discrete objective. Recently, a fully-differentiable MoE architecture SMEAR was proposed (Muqeeth et al., 2023), which softly merges experts in the parameter space. Nevertheless, its effectiveness was only demonstrated in downstream fine-tuning on classification tasks. In this paper, we present Lory, a novel approach that scales such architectures to autoregressive language model pre-training. Lory introduces two key techniques: (1) a causal segment routing strategy that achieves high efficiency for expert merging operations while preserving the autoregressive nature of language models; (2) a similarity-based data batching method that encourages expert specialization by grouping similar documents in training instances. We pre-train a series of Lory models from scratch on 150B tokens, with up to 32 experts and 30B (1.5B active) parameters. Experimental results show significant performance gains over parameter-matched dense models in both perplexity (+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level routing, Lory models achieve competitive performance compared to state-of-the-art MoE models with token-level routing. We further demonstrate that the trained experts capture domain-level specialization without supervision. Our work highlights the potential of fully-differentiable MoE architectures for language model pre-training and advocates future research in this area.
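A minimal sketch of the SMEAR-style building block the abstract relies on: experts are merged in parameter space with differentiable router weights, so a single forward pass through the merged weights suffices. Causal segment routing and similarity-based batching, the paper's actual contributions, are omitted here.

```python
# Soft expert merging in parameter space: the router produces a probability vector,
# and expert weight matrices are averaged with those probabilities before the matmul.
import torch

def merged_linear(x: torch.Tensor, expert_weights: torch.Tensor, router_probs: torch.Tensor) -> torch.Tensor:
    """expert_weights: (n_experts, d_out, d_in); router_probs: (n_experts,), sums to 1."""
    W = torch.einsum("e,eoi->oi", router_probs, expert_weights)  # one merged weight matrix
    return x @ W.T                                               # a single forward pass

torch.manual_seed(0)
n_experts, d_in, d_out = 4, 8, 8
experts = torch.randn(n_experts, d_out, d_in)
router_probs = torch.softmax(torch.randn(n_experts), dim=-1)     # fully differentiable routing
x = torch.randn(2, d_in)
print(merged_linear(x, experts, router_probs).shape)             # torch.Size([2, 8])
```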
Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training
[ "Zexuan Zhong", "Mengzhou Xia", "Danqi Chen", "Mike Lewis" ]
Conference
Poster
2405.03133
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
198
null
https://openreview.net/forum?id=LFfktMPAci
@inproceedings{ ross2024what, title={What makes a good metric? Evaluating automatic metrics for text-to-image consistency}, author={Candace Ross and Melissa Hall and Adriana Romero-Soriano and Adina Williams}, booktitle={First Conference on Language Modeling}, year={2024}, url={https://openreview.net/forum?id=LFfktMPAci} }
Language models are increasingly being incorporated as components in larger AI systems for various purposes, from prompt optimization to automatic evaluation. In this work, we analyze the construct validity of four recent, commonly used methods for measuring text-to-image consistency (CLIPScore, TIFA, VPEval, and DSG), which rely on language models and/or VQA models as components. We define construct validity for text-image consistency metrics as a set of desiderata that such metrics should have, and find that no tested metric satisfies all of them. We find that metrics lack sufficient sensitivity to language and visual properties. Next, we find that TIFA, VPEval, and DSG contribute novel information above and beyond CLIPScore, but also that they correlate highly with each other. We also ablate different aspects of the text-image consistency metrics and find that not all model components are strictly necessary, which is itself a symptom of insufficient sensitivity to visual information. Finally, we show that all three VQA-based metrics likely rely on familiar text shortcuts (such as yes-bias in QA) that call their aptitude as quantitative evaluations of model performance into question.
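For reference, CLIPScore, one of the audited metrics, is a rescaled, clipped cosine similarity between CLIP embeddings of the image and the caption (the 2.5 rescaling follows the original CLIPScore paper). A sketch using the Hugging Face CLIP checkpoint; the solid-color placeholder image is only there so the script runs end to end.

```python
# CLIPScore sketch: 2.5 * max(cosine(image_embed, text_embed), 0).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    sim = torch.cosine_similarity(out.image_embeds, out.text_embeds).item()
    return 2.5 * max(sim, 0.0)

image = Image.new("RGB", (224, 224), "red")          # placeholder image
print(clip_score(image, "a plain red square"))
```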
What makes a good metric? Evaluating automatic metrics for text-to-image consistency
[ "Candace Ross", "Melissa Hall", "Adriana Romero-Soriano", "Adina Williams" ]
Conference
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
199