---
license: mit
task_categories:
  - question-answering
  - translation
  - summarization
  - text-classification
  - text-retrieval
language:
  - en
  - zh
tags:
  - Long Context
size_categories:
  - 1K<n<10K
configs:
- config_name: mnds-news_semantic-multiple
  data_files:
  - split: test
    path: classification/mnds-news_semantic-multiple.jsonl 
- config_name: thucnews_explicit-single
  data_files:
  - split: test
    path: classification/thucnews_explicit-single.jsonl
- config_name: mnds-news_explicit-multiple
  data_files:
  - split: test
    path: classification/mnds-news_explicit-multiple.jsonl
- config_name: thucnews_explicit-multiple
  data_files:
  - split: test
    path: classification/thucnews_explicit-multiple.jsonl
- config_name: mnds-news_explicit-single
  data_files:
  - split: test
    path: classification/mnds-news_explicit-single.jsonl
- config_name: bigpatent_global_cls
  data_files:
  - split: test
    path: classification/bigpatent_global_cls.jsonl
- config_name: marc
  data_files:
  - split: test
    path: classification/marc.jsonl
- config_name: thucnews_semantic-multiple
  data_files:
  - split: test
    path: classification/thucnews_semantic-multiple.jsonl
- config_name: online-shopping
  data_files:
  - split: test
    path: classification/online-shopping.jsonl
- config_name: wikitext-103
  data_files:
  - split: test
    path: nli/wikitext-103.jsonl
- config_name: wiki2019zh
  data_files:
  - split: test
    path: nli/wiki2019zh.jsonl
- config_name: tedtalks-zh2en
  data_files:
  - split: test
    path: translation/tedtalks-zh2en.jsonl
- config_name: news-commentary-zh2en
  data_files:
  - split: test
    path: translation/news-commentary-zh2en.jsonl
- config_name: open-subtitles-zh2en
  data_files:
  - split: test
    path: translation/open-subtitles-zh2en.jsonl
- config_name: open-subtitles-en2zh
  data_files:
  - split: test
    path: translation/open-subtitles-en2zh.jsonl
- config_name: news-commentary-en2zh
  data_files:
  - split: test
    path: translation/news-commentary-en2zh.jsonl
- config_name: tedtalks-en2zh
  data_files:
  - split: test
    path: translation/tedtalks-en2zh.jsonl
- config_name: cnnnews
  data_files:
  - split: test
    path: summarization/cnnnews.jsonl
- config_name: clts
  data_files:
  - split: test
    path: summarization/clts.jsonl
- config_name: cnewsum
  data_files:
  - split: test
    path: summarization/cnewsum.jsonl
- config_name: booksum
  data_files:
  - split: test
    path: summarization/booksum.jsonl
- config_name: cepsum
  data_files:
  - split: test
    path: summarization/cepsum.jsonl
- config_name: pubmed
  data_files:
  - split: test
    path: summarization/pubmed.jsonl
- config_name: lcsts
  data_files:
  - split: test
    path: summarization/lcsts.jsonl
- config_name: news2016
  data_files:
  - split: test
    path: summarization/news2016.jsonl
- config_name: arxiv
  data_files:
  - split: test
    path: summarization/arxiv.jsonl
- config_name: wikihow
  data_files:
  - split: test
    path: summarization/wikihow.jsonl
- config_name: bigpatent_global_sum
  data_files:
  - split: test
    path: summarization/bigpatent_global_sum.jsonl
- config_name: ncls
  data_files:
  - split: test
    path: summarization/ncls.jsonl
- config_name: drcd_semantic-single
  data_files:
  - split: test
    path: qa/drcd_semantic-single.jsonl
- config_name: duorc
  data_files:
  - split: test
    path: qa/duorc.jsonl
- config_name: nq-open
  data_files:
  - split: test
    path: qa/nq-open.jsonl
- config_name: newsqa
  data_files:
  - split: test
    path: qa/newsqa.jsonl
- config_name: triviaqa
  data_files:
  - split: test
    path: qa/triviaqa.jsonl
- config_name: c3
  data_files:
  - split: test
    path: qa/c3.jsonl
- config_name: dureader
  data_files:
  - split: test
    path: qa/dureader.jsonl
- config_name: hotpotqa
  data_files:
  - split: test
    path: qa/hotpotqa.jsonl
- config_name: wow
  data_files:
  - split: test
    path: topic_retrieval/wow.jsonl
- config_name: drcd_explicit-single
  data_files:
  - split: test
    path: topic_retrieval/drcd_explicit-single.jsonl
---

## Introduction

**M4LE** is a **M**ulti-ability, **M**ulti-range, **M**ulti-task, bilingual benchmark for long-context evaluation. We categorize long-context understanding into five distinct abilities based on whether the task requires identifying single or multiple spans in a long context using explicit or semantic hints: explicit single-span, semantic single-span, explicit multiple-span, semantic multiple-span, and global. Unlike previous long-context benchmarks, which simply compile a set of existing long NLP datasets, we introduce an automated method that transforms short-sequence tasks into a comprehensive long-sequence scenario encompassing all of these capabilities.

M4LE consists of 36 tasks, covering 11 task types and 12 domains. For each task, we construct 200 instances for each context-length bucket (1K, 2K, 4K, 6K, 8K, 12K, 16K, 24K, 32K). Due to computation and cost constraints, our paper evaluates 11 well-established LLMs on instances up to the 8K context-length bucket. For more details, please refer to the paper available at <https://arxiv.org/abs/2310.19240>. You can also explore the GitHub page at <https://github.com/KwanWaiChung/M4LE>.

## Usage

You can load the dataset by specifying the task name:

```python
from datasets import load_dataset
tasks = [
    "arxiv",
    "bigpatent_global_cls",
    "bigpatent_global_sum",
    "booksum",
    "c3",
    "cepsum",
    "clts",
    "cnewsum",
    "cnnnews",
    "drcd_explicit-single",
    "drcd_semantic-single",
    "duorc",
    "dureader",
    "hotpotqa",
    "lcsts",
    "marc",
    "mnds-news_explicit-single",
    "mnds-news_explicit-multiple",
    "mnds-news_semantic-multiple",
    "ncls",
    "news-commentary-en2zh",
    "news-commentary-zh2en",
    "news2016",
    "newsqa",
    "nq-open",
    "online-shopping",
    "open-subtitles-en2zh",
    "open-subtitles-zh2en",
    "pubmed",
    "tedtalks-en2zh",
    "tedtalks-zh2en",
    "thucnews_explicit-single",
    "thucnews_explicit-multiple",
    "thucnews_semantic-multiple",
    "triviaqa",
    "wiki2019zh",
    "wikihow",
    "wikitext-103",
    "wow",
]

for task in tasks:
    data = load_dataset('wckwan/M4LE', task, split='test')
```
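Each test split mixes instances from all context-length buckets. To evaluate at one specific length, filter on the `length_bucket` field. The sketch below uses a hand-made list of instances for illustration; the assumption that buckets are stored as plain integers (e.g. `1000` for the 1K bucket) should be checked against the loaded data:

```python
def filter_by_bucket(instances, bucket):
    """Keep only the instances whose length_bucket equals the requested bucket."""
    return [x for x in instances if x["length_bucket"] == bucket]

# Illustrative instances; real ones come from load_dataset as shown above.
instances = [
    {"instruction": "Summarize ...", "length_bucket": 1000},
    {"instruction": "Summarize ...", "length_bucket": 8000},
]

short_only = filter_by_bucket(instances, 1000)

# With a loaded Hugging Face dataset, the equivalent is:
# data = data.filter(lambda x: x["length_bucket"] == 1000)
```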

## Format

Each testing instance follows this format:

```yaml
{
    "instruction": "<task description>",
    "input": "<task input with one-shot example>",
    "answers": ["<answer1>", "<answer2>"],
    "input_length": <int, number of space-separated words in the instruction and input>,
    "total_length": <int, number of space-separated words in the instruction, input, and gold answer>,
    "length_bucket": <int, the length bucket to which this instance belongs>
}
```
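To make the fields concrete, the sketch below builds a prompt from one instance and checks a prediction against the gold answers. The instance is hand-made for illustration, not drawn from the dataset, and the exact-match check is only a simple example of scoring:

```python
# A hand-made instance following the format above (not from the dataset).
instance = {
    "instruction": "Answer the question based on the specified news article.",
    "input": "Question: Who won? Article 1: ...",
    "answers": ["Alice"],
    "input_length": 15,
    "total_length": 16,
    "length_bucket": 1000,
}

# The model prompt is the instruction followed by the input.
prompt = f"{instance['instruction']}\n\n{instance['input']}"

# input_length counts the space-separated words of instruction + input.
assert instance["input_length"] == len(prompt.split())

def is_correct(prediction, answers):
    """A prediction counts as correct if it exactly matches any gold answer."""
    return any(prediction.strip() == answer for answer in answers)
```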

## Tasks

Here is the full list of tasks with their descriptions. For more details about these tasks, please refer to the paper.

Ability           | Task Name                                   | Task Type  | Language | Description
----------------- | ------------------------------------------- | ---------- | -------- | ------------------------------------------------------------------
Explicit Single   | mnds-news_explicit-single                   | CLS + RET  | En       | Classify a specified news article.
Explicit Single   | thucnews_explicit-single                    | CLS + RET  | Zh       | Classify a specified news article.
Explicit Single   | newsqa                                      | QA + RET   | En       | Answer a question based on a specified news article.
Explicit Single   | c3                                          | QA + RET   | Zh       | Answer a multi-choice question based on a textbook extract.
Explicit Single   | wow                                         | RET        | En       | Return the ID of the article related to a specified topic.
Explicit Single   | drcd_explicit-single                        | RET        | Zh       | Return the ID of the article related to a specified topic.
Explicit Single   | cnnnews                                     | SUM + RET  | En       | Summarize a specified news article.
Explicit Single   | cepsum                                      | SUM + RET  | Zh       | Summarize a specified product description.
Explicit Single   | lcsts                                       | SUM + RET  | Zh       | Summarize a specified news article.
Explicit Single   | ncls                                        | SUM + RET  | En, Zh   | Summarize a specified news article.
Explicit Multiple | mnds-news_explicit-multiple                 | CLS + RET  | En       | Return the IDs of all the articles belonging to a specified class.
Explicit Multiple | thucnews_explicit-multiple                  | CLS + RET  | Zh       | Return the IDs of all the articles belonging to a specified class.
Explicit Multiple | marc                                        | CLS + RET  | En, Zh   | Return the IDs of all the positive product reviews.
Explicit Multiple | online-shopping                             | CLS + RET  | Zh       | Return the IDs of all the positive product reviews.
Semantic Single   | wikitext-103                                | NLI + RET  | En       | Return the ID of the paragraph that continues a query paragraph.
Semantic Single   | wiki2019zh                                  | NLI + RET  | Zh       | Return the ID of the paragraph that continues a query paragraph.
Semantic Single   | duorc                                       | QA         | En       | Answer a question based on multiple movie plots.
Semantic Single   | nq-open                                     | QA         | En       | Answer a question based on multiple wikipedia paragraphs.
Semantic Single   | dureader                                    | QA         | Zh       | Answer a question based on multiple web snippets.
Semantic Single   | drcd_semantic-single                        | QA         | Zh       | Answer a question based on multiple wikipedia paragraphs.
Semantic Single   | wikihow                                     | SUM + RET  | En       | Summarize an article based on a given topic.
Semantic Single   | news2016                                    | SUM + RET  | Zh       | Summarize a news article based on a given title.
Semantic Single   | tedtalks-en2zh/tedtalks-zh2en               | TRAN + RET | En, Zh   | Translate a Ted Talk transcript based on a given title.
Semantic Multiple | mnds-news_semantic-multiple                 | CLS + CNT  | En       | Return the number of news articles belonging to a specified class.
Semantic Multiple | thucnews_semantic-multiple                  | CLS + CNT  | Zh       | Return the number of news articles belonging to a specified class.
Semantic Multiple | hotpotqa                                    | QA         | En       | Answer a question based on multiple wikipedia paragraphs.
Global            | bigpatent_global_cls                        | CLS        | En       | Classify a patent document.
Global            | triviaqa                                    | QA         | En       | Answer a question based on a web snippet.
Global            | arxiv                                       | SUM        | En       | Summarize an academic paper.
Global            | bigpatent_global_sum                        | SUM        | En       | Summarize a patent document.
Global            | pubmed                                      | SUM        | En       | Summarize a medical paper.
Global            | booksum                                     | SUM        | En       | Summarize one or more chapters of a book.
Global            | cnewsum                                     | SUM        | Zh       | Summarize a news article.
Global            | clts                                        | SUM        | Zh       | Summarize a news article.
Global            | open-subtitles-en2zh/open-subtitles-zh2en   | TRAN       | En, Zh   | Translate the movie subtitles.
Global            | news-commentary-en2zh/news-commentary-zh2en | TRAN       | En, Zh   | Translate the news commentaries.

## Citation

If you find our paper and resources useful, please consider citing our paper:

```bibtex
@misc{kwan_m4le_2023,
  title = {{M4LE}: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models},
  author = {Kwan, Wai-Chung and Zeng, Xingshan and Wang, Yufei and Sun, Yusen and Li, Liangyou and Shang, Lifeng and Liu, Qun and Wong, Kam-Fai},
  year = {2023},
  eprint = {2310.19240},
  archiveprefix = {arXiv},
}
```