Papers
arxiv:2407.03651

Evaluating Language Model Context Windows: A "Working Memory" Test and Inference-time Correction

Published on Jul 4
· Submitted by fredsala on Jul 9
Abstract

Large language models are prominently used in real-world applications, often tasked with reasoning over large volumes of documents. An exciting development in this space is models boasting extended context capabilities, with some accommodating over 2 million tokens. How well these long-context capabilities hold up in production systems remains uncertain, motivating the need to benchmark their performance on real-world use cases. We address this challenge by proposing SWiM, an evaluation framework that addresses the limitations of standard tests. Applying the framework to eight long-context models, we find that even strong models such as GPT-4 and Claude 3 Opus degrade in performance when relevant information sits in the middle of the context window (the "lost in the middle" effect). In addition to our benchmark, we propose medoid voting, a simple but effective training-free approach that helps alleviate this effect: generate a response several times, each time randomly permuting the documents in the context, and select the medoid answer. We evaluate medoid voting on single-document QA tasks, achieving up to a 24% lift in accuracy.
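The medoid step above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it uses `difflib.SequenceMatcher` string similarity as a stand-in for whatever semantic distance (e.g., embedding distance) an actual system would use, and the candidate answers are hypothetical outputs from runs with shuffled document order.

```python
from difflib import SequenceMatcher

def medoid_answer(answers):
    """Pick the medoid: the answer with minimal total distance to all others.

    Distance here is 1 - SequenceMatcher ratio, a toy stand-in for a
    semantic distance between generated answers.
    """
    def dist(a, b):
        return 1.0 - SequenceMatcher(None, a, b).ratio()

    # Sum each answer's distance to every answer; the medoid minimizes this.
    totals = [sum(dist(a, b) for b in answers) for a in answers]
    return answers[totals.index(min(totals))]

# Hypothetical responses from several runs over permuted document orders.
answers = ["Paris", "Paris", "Lyon", "Paris is the capital", "Paris"]
print(medoid_answer(answers))  # → Paris
```

Because outlier answers (produced when the relevant document lands in a "lost" position) disagree with each other as well as with the majority, the medoid tends to land on a consistent, correct response.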

Community

Paper author Paper submitter

Long-context language models (LCMs) are currently being explored by the community and many businesses for their ability to perform tasks over very large documents and document repositories. But how good are these LCMs when we start to fill up their context windows with more and more data? To date, the most commonly used test of LCM capabilities is the "Needle in a Haystack" (NIAH) test.

Paper highlights:

  • NIAH can miss important LCM deficiencies, most notably the “lost in the middle” effect.
  • These same deficiencies can be uncovered by an alternative test we have developed, called the “Snorkel Working Memory Test” (SWiM).
  • We show how to elegantly correct for the types of deficiencies uncovered by SWiM and provide open-source code to do so.

More details:
https://github.com/snorkel-ai/long-context-eval


