Deepchung committed on
Commit ea18d0e
1 Parent(s): e8c4357

update readme

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -174,9 +174,9 @@ configs:
 
 ## Introduction
 
-**M4LE** is a **M**ulti-ability, **M**ulti-range, **M**ulti-task, bilingual benchmark for long-context evaluation. We categorize long-context understanding into five distinct abilities by considering whether it is required to identify single or multiple spans in long contexts based on explicit or semantic hints. Specifically, these abilities are explicit single-span, semantic single-span, explicit multiple-span, semantic multiple-span, and global. Different from previous long-context benchmark that simply compile from a set of existing long NLP benchmarks, we introduce an automated method to transform short-sequence tasks into a comprehensive long-sequence scenario encompassing all these capabilities.
+**M4LE** is a **M**ulti-ability, **M**ulti-range, **M**ulti-task, bilingual benchmark for long-context evaluation. We categorize long-context understanding into five distinct abilities by considering whether it is required to identify single or multiple spans in long contexts based on explicit or semantic hints. Specifically, these abilities are explicit single-span, semantic single-span, explicit multiple-span, semantic multiple-span, and global. Different from previous long-context benchmarks that simply compile from a set of existing long NLP benchmarks, we introduce an automated method to transform short-sequence tasks into a comprehensive long-sequence scenario encompassing all these capabilities.
 
-M4LE consists of 36 tasks, covering 11 task types and 12 domains. For each task, we construct 200 instances for each context length bucket (1K, 2K, 4K, 6K, 8K, 12K, 16K, 24K, 32K). Due to computation and cost constraints, our paper evaluated 11 well-established LLMs on instances up to the 8K context length bucket. For more details, please refer to the paper available at <https://arxiv.org/abs/2310.19240>. You can also explore the Github page at <https://github.com/KwanWaiChung/M4LE>.
+M4LE consists of 36 tasks, covering 11 task types and 12 domains. For each task, we construct 200 instances for each context length bucket (1K, 2K, 4K, 6K, 8K, 12K, 16K, 24K, 32K, 64K, 128K). Due to computation and cost constraints, our paper evaluated 11 well-established LLMs on instances up to the 8K context length bucket. For more details, please refer to the paper available at <https://arxiv.org/abs/2310.19240>. You can also explore the GitHub page at <https://github.com/KwanWaiChung/M4LE>.
 
 ## Usage
 
@@ -247,7 +247,7 @@ Each testing instance follows this format:
 
 ## Tasks
 
-Here is the full list for the tasks with their descriptions. More details about these tasks, please refer to the paper .
+Here is the full list of the tasks with their descriptions. For more details about these tasks, please refer to the paper.
 
 Ability | Task Name | Task Type | Language | Description
 ----------------- | ------------------------------------------- | ---------- | -------- | ------------------------------------------------------------------