---
license: apache-2.0
configs:
- config_name: kv75
  data_files:
  - split: test
    path: "data/kv75.jsonl"
- config_name: kv140
  data_files:
  - split: test
    path: "data/kv140.jsonl"
- config_name: kv300
  data_files:
  - split: test
    path: "data/kv300.jsonl"
- config_name: qa10
  data_files:
  - split: test
    path: "data/qa10.jsonl"
- config_name: qa20
  data_files:
  - split: test
    path: "data/qa20.jsonl"
- config_name: qa30
  data_files:
  - split: test
    path: "data/qa30.jsonl"
task_categories:
- question-answering
tags:
- lost-in-the-middle
size_categories:
- n<1K
---

# Datasets for Lost In The Middle

This repository contains the datasets used in the paper ["Lost in the Middle: How Language Models Use Long Contexts"](https://arxiv.org/abs/2307.03172), covering the multi-document question answering and key-value retrieval tasks.

## Datasets Overview

The following configurations are provided:

- **Key-Value Retrieval Datasets**
  - `kv75`: key-value retrieval with 75 key-value pairs per context.
  - `kv140`: key-value retrieval with 140 key-value pairs per context.
  - `kv300`: key-value retrieval with 300 key-value pairs per context.
- **Multi-Document Question Answering Datasets**
  - `qa10`: questions answered from 10 retrieved documents.
  - `qa20`: questions answered from 20 retrieved documents.
  - `qa30`: questions answered from 30 retrieved documents.

## Loading the Data

You can load these datasets with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Example: load the kv75 dataset
dataset = load_dataset("bzantium/LITM", "kv75")

# Example: load the qa20 dataset
dataset = load_dataset("bzantium/LITM", "qa20")
```
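
To see what each record looks like, you can inspect the loaded split. The snippet below is a minimal sketch that relies only on the standard `datasets` API and makes no assumptions about the field names stored in the JSONL files:

```python
from datasets import load_dataset

# Each config exposes a single "test" split (see the YAML header above).
dataset = load_dataset("bzantium/LITM", "kv75")
test = dataset["test"]

print(len(test))          # number of examples in the split
print(test.column_names)  # field names available in each record
print(test[0])            # first record, as a Python dict
```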