---
configs:
- config_name: narrativeqa
  data_files:
  - split: corpus
    path: narrativeqa/corpus.jsonl
  - split: queries
    path: narrativeqa/queries.jsonl
  - split: qrels
    path: narrativeqa/qrels.jsonl
- config_name: summ_screen_fd
  data_files:
  - split: corpus
    path: summ_screen_fd/corpus.jsonl
  - split: queries
    path: summ_screen_fd/queries.jsonl
  - split: qrels
    path: summ_screen_fd/qrels.jsonl
- config_name: qmsum
  data_files:
  - split: corpus
    path: qmsum/corpus.jsonl
  - split: queries
    path: qmsum/queries.jsonl
  - split: qrels
    path: qmsum/qrels.jsonl
- config_name: 2wikimqa
  data_files:
  - split: corpus
    path: 2wikimqa/corpus.jsonl
  - split: queries
    path: 2wikimqa/queries.jsonl
  - split: qrels
    path: 2wikimqa/qrels.jsonl
- config_name: passkey
  data_files:
  - split: corpus
    path: passkey/corpus.jsonl
  - split: queries
    path: passkey/queries.jsonl
  - split: qrels
    path: passkey/qrels.jsonl
- config_name: needle
  data_files:
  - split: corpus
    path: needle/corpus.jsonl
  - split: queries
    path: needle/queries.jsonl
  - split: qrels
    path: needle/qrels.jsonl
language:
- en
tags:
- Long Context
size_categories:
- 1K<n<10K
---
## Introduction
This repo contains the LongEmbed benchmark proposed in the paper [LongEmbed: Extending Embedding Models for Long Context Retrieval](https://arxiv.org/abs/2404.12096) by Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li (arXiv, April 2024). The GitHub repo for LongEmbed is at https://github.com/dwzhu-pku/LongEmbed.

**LongEmbed** is designed to benchmark long-context retrieval. It includes two synthetic tasks and four real-world tasks, featuring documents of varying lengths and dispersed target information. It has been integrated into [MTEB](https://github.com/embeddings-benchmark/mteb) for convenient evaluation.

## How to use it?

#### Loading Data
LongEmbed contains six datasets: NarrativeQA, QMSum, 2WikiMultihopQA, SummScreenFD, Passkey, and Needle. Each dataset has three splits: corpus, queries, and qrels. The `corpus.jsonl` file contains the documents, the `queries.jsonl` file contains the queries, and the `qrels.jsonl` file describes the relevance of documents to queries. To load a specific split of a dataset, you may use:

```python
from datasets import load_dataset

dataset_name = "narrativeqa"  # in ["narrativeqa", "summ_screen_fd", "qmsum", "2wikimqa", "passkey", "needle"]
split_name = "corpus"         # in ["corpus", "queries", "qrels"]
data = load_dataset(path="dwzhu/LongEmbed", name=dataset_name, split=split_name)
```
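
For example, the following minimal sketch loads all three splits of NarrativeQA (any of the six config names works the same way) and prints the first record of each, so you can see exactly which fields each JSONL file carries:

```python
from datasets import load_dataset

# Minimal sketch: peek at each split of one dataset. The printed fields are
# whatever the underlying JSONL files contain; nothing here assumes a schema.
for split in ["corpus", "queries", "qrels"]:
    ds = load_dataset(path="dwzhu/LongEmbed", name="narrativeqa", split=split)
    print(f"{split}: {len(ds)} records; first record: {ds[0]}")
```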

#### Evaluation

The evaluation of LongEmbed can be easily conducted using MTEB (>=1.6.22). For the four real-world tasks, you can evaluate as follows:

```python
from mteb import MTEB
retrieval_task_list = [
    "LEMBSummScreenFDRetrieval",
    "LEMBQMSumRetrieval",
    "LEMBWikimQARetrieval",
    "LEMBNarrativeQARetrieval",
]
output_dict = {}
evaluation = MTEB(tasks=retrieval_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder="results", overwrite_results=True, batch_size=32, verbosity=0)
for key, value in results.items():
    split = "test" if "test" in value else "validation"
    output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
print(output_dict)
```

For the two synthetic tasks, since we examine a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens, an additional `context_length` parameter is required. You may evaluate as follows:

```python
from mteb import MTEB
needle_passkey_task_list = ["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"]
output_dict = {}
context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]
evaluation = MTEB(tasks=needle_passkey_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder="results", overwrite_results=True, batch_size=32, verbosity=0)
for key, value in results.items():
    needle_passkey_score_list = []
    for ctx_len in context_length_list:
        needle_passkey_score_list.append([ctx_len, value[f"test_{ctx_len}"]["ndcg_at_1"]])
    needle_passkey_score_list.append(["avg", sum(x[1] for x in needle_passkey_score_list) / len(context_length_list)])
    output_dict[key] = {item[0]: item[1] for item in needle_passkey_score_list}
print(output_dict)
```
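
Both snippets above leave loading the `model` as a TODO. As one hypothetical way to fill it in (the checkpoint name below is purely illustrative, not the paper's choice), any encoder that MTEB accepts, such as a sentence-transformers model, can be plugged in:

```python
from sentence_transformers import SentenceTransformer

# Illustrative only: any MTEB-compatible encoder works here. This particular
# checkpoint is an assumption, not the model evaluated in the paper.
model = SentenceTransformer("intfloat/e5-base-v2")
```

Whichever model you pass, make sure its maximum sequence length actually covers the context lengths you intend to evaluate.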

## Task Description

LongEmbed includes four real-world retrieval tasks curated from long-form QA and summarization datasets. Note that for QA and summarization, we use the questions and summaries as queries, respectively.

- [NarrativeQA](https://huggingface.co/datasets/narrativeqa): A QA dataset comprising long stories averaging 50,474 words and corresponding questions about specific content such as characters and events. We adopt the `test` set of the original dataset.
- [2WikiMultihopQA](https://huggingface.co/datasets/THUDM/LongBench/viewer/2wikimqa_e): A multi-hop QA dataset featuring questions with up to 5 hops, synthesized through manually designed templates to prevent shortcut solutions. We use the `test` split of the length-uniformly sampled version from [LongBench](https://huggingface.co/datasets/THUDM/LongBench).
- [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set only has 60 documents, which is too small for document retrieval, we include the `train` set in addition to the `validation` set.
- [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. As with QMSum, plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).

We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from the [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) test for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.


## Task Statistics

| Dataset | Domain | # Queries | # Docs | Avg. Query Words | Avg. Doc Words |
|---------|--------|-----------|--------|------------------|----------------|
| NarrativeQA | Literature, Film | 10,449 | 355 | 9 | 50,474 |
| QMSum | Meeting | 1,527 | 197 | 71 | 10,058 |
| 2WikimQA | Wikipedia | 300 | 300 | 12 | 6,132 |
| SummScreenFD | ScreenWriting | 336 | 336 | 102 | 5,582 |
| Passkey | Synthetic | 400 | 800 | 11 | - |
| Needle | Synthetic | 400 | 800 | 7 | - |


## Citation
If you find our paper helpful, please consider citing it as follows:

```
@article{zhu2024longembed,
  title={LongEmbed: Extending Embedding Models for Long Context Retrieval},
  author={Zhu, Dawei and Wang, Liang and Yang, Nan and Song, Yifan and Wu, Wenhao and Wei, Furu and Li, Sujian},
  journal={arXiv preprint arXiv:2404.12096},
  year={2024}
}
```