---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- context
- qa
- long
- benchmark
- llm
size_categories:
- n<1K
pretty_name: ContextStretchQA
---
# QA Increasing Context Length Dataset

## 1. Overview
The QA Increasing Context Length dataset is designed to facilitate benchmarking and research on question-answering (QA) systems as the size of the input context grows. It compiles QA examples drawn from multiple LongBench subsets, bucketed by ascending context length (measured in tokens). Researchers can use this dataset to evaluate how modern language models and retrieval-augmented systems handle progressively larger contexts (from 3K up to 32K tokens) in terms of accuracy, latency, memory usage, and robustness.
**Intended purpose**
- To measure QA performance (e.g., exact match, F1) under different context-length regimes; a minimal scoring sketch follows this list.
- To assess inference latency, throughput, and resource utilization when models process long documents.
- To compare retrieval strategies or memory‐efficient attention mechanisms as context size increases.
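The dataset does not prescribe a particular metric implementation. As a reference point, here is a minimal SQuAD-style scoring sketch; the helper names `normalize`, `exact_match`, and `f1_score` are illustrative, not part of the dataset:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    if not pred or not ref:
        return float(pred == ref)
    # Token-level overlap between prediction and reference
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Scores can then be averaged per `context_range` bucket to chart how accuracy degrades as contexts grow.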
**Key features**
- A single CSV (`longbench_all_buckets_100.csv`) containing examples from five context-length buckets: 3K, 4K, 8K, 16K, and 32K tokens.
- Each row includes a complete (potentially multi-paragraph) passage, a target question, and its ground-truth answer, along with metadata fields that facilitate grouping, filtering, or statistical analysis.
- Examples are drawn from diverse domains (scientific articles, technical reports, web pages, etc.), as indicated by the `dataset` field.
## 2. Dataset Structure
- File format: Comma‐separated values (UTF-8 encoded)
- Number of rows: Varies by bucket (typically 100 examples per bucket)
- Context-length buckets: 5 (`3k`, `4k`, `8k`, `16k`, `32k`)
### 2.1. Column Descriptions
Each row (example) has six columns:
| Column Name | Type | Description |
|---|---|---|
| `context` | string | A (long) text passage whose token count falls into one of the predefined buckets (3K–32K). |
| `question` | string | A natural-language question referring to information contained in `context`. |
| `answer` | string | The ground-truth answer (text span or summary) extracted from the context. |
| `length` | int | The exact token count of the context (as measured by a standard tokenizer, e.g., T5/BPE). |
| `dataset` | string | The original LongBench subset (e.g., "scitldr", "arxiv", "pubmed") from which the example was drawn. |
| `context_range` | string | One of `"3k"`, `"4k"`, `"8k"`, `"16k"`, or `"32k"`; indicates the bucket into which `length` falls. |
Context buckets (`context_range`):
- `3k`: 1,500–3,000 tokens (approximate; exact boundaries may vary)
- `4k`: 3,000–3,999 tokens
- `8k`: 4,000–7,999 tokens
- `16k`: 8,000–15,999 tokens
- `32k`: 16,000–31,999 tokens
Note: The buckets are chosen to stress-test long-context inference. The exact cutoffs may be implementation-dependent, but each row's `length` field records the precise token count.
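To sanity-check the bucketing, one option is to load the CSV and compare each bucket's observed `length` range against the nominal boundaries above. A minimal sketch, assuming pandas (any CSV reader works) and the filename given earlier:

```python
import pandas as pd

# Load the combined CSV described above
df = pd.read_csv("longbench_all_buckets_100.csv")

# Per-bucket row counts and observed token-length extremes; the min/max
# values should fall inside the nominal boundaries listed for each bucket.
summary = df.groupby("context_range")["length"].agg(["count", "min", "max"])
print(summary)
```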
## 3. Loading
If this collection has been published under a Hugging Face dataset ID (for example, `slinusc/qa_increasing_context_length`), you can load it directly:
```python
from datasets import load_dataset

# Replace with the actual HF dataset ID if different
dataset = load_dataset("slinusc/qa_increasing_context_length")

# Print overall structure and splits
print(dataset)

# Inspect column names in the "train" split
print(dataset["train"].column_names)
# ["context", "question", "answer", "length", "dataset", "context_range"]
```
## 4. Citation & License
- If you plan to publish results using this dataset, please refer to the original LongBench publication (*LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding*) and cite the specific subset(s) from which examples were drawn.
- Check the dataset card on the Hugging Face Hub for detailed licensing information. LongBench subsets typically carry permissive licenses for research use, but always verify at https://huggingface.co/datasets/… before redistribution.