---
license: mit
---

# Dataset Description

## Dataset Summary

The NeedleBench dataset is part of the OpenCompass project, designed to evaluate the capabilities of large language models (LLMs) in processing and understanding long documents. It includes a series of test scenarios that assess models' abilities in long-text information extraction and reasoning. The dataset is structured to support tasks such as single-needle retrieval, multi-needle retrieval, multi-needle reasoning, and ancestral trace challenges.

<div style="text-align: center;">
    <img src="https://github.com/user-attachments/assets/b895e0cf-4307-47d8-8e5a-9a4d1c58fa37" alt="NeedleBench Overview" width="900" style="margin: auto;">
</div>

## Supported Tasks and Primary Languages

- **Single-Needle Retrieval Task (S-RT)**: Extracting a single key piece of information from a long text.
- **Multi-Needle Retrieval Task (M-RT)**: Retrieving multiple related pieces of information from long texts.
- **Multi-Needle Reasoning Task (M-RS)**: Extracting and utilizing multiple key pieces of information for comprehensive understanding.
- **Ancestral Trace Challenge (ATC)**: Handling multi-layer logical challenges in real long texts.

The dataset supports multiple languages, including English and Chinese, as indicated by files such as `multi_needle_reasoning_en.json` and `multi_needle_reasoning_zh.json`.
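
The per-task JSON files can be inspected with the Hugging Face `datasets` library (or pandas). The snippet below is a minimal sketch, assuming the files have been downloaded locally and that each file is a flat list of records; adjust the path, split name, and field names to the actual layout.

```python
from datasets import load_dataset

# Minimal sketch: load one task file (file name taken from this card).
# Assumes the JSON has been downloaded locally and is a flat list of records.
ds = load_dataset(
    "json",
    data_files={"test": "multi_needle_reasoning_en.json"},
    split="test",
)

print(ds)      # row count and column names
print(ds[0])   # first record, to inspect the schema

# pandas works equally well for quick ad-hoc inspection.
df = ds.to_pandas()
print(df.head())
```
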
## Potential Use Cases

The NeedleBench dataset can be used to evaluate and compare the performance of large language models on tasks involving long-text processing, information extraction, and reasoning. It is useful for researchers and developers working on models that need to handle complex queries over extensive documents.

## Evaluation

Please follow the guidelines in the [OpenCompass documentation](https://opencompass.readthedocs.io/en/latest/advanced_guides/needleinahaystack_eval.html) to set up the environment, configure the dataset, and run evaluations.

## Additional Information

For more details on the dataset, please refer to the [NeedleBench Technical Report](https://arxiv.org/abs/2407.11963).

## Contact

For any questions or issues related to the dataset, please contact the maintainers or contributors of the [OpenCompass project](https://github.com/open-compass/opencompass).

## Citation

If you use this dataset, please cite:

```bibtex
@misc{li2024needlebenchllmsretrievalreasoning,
      title={NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?},
      author={Mo Li and Songyang Zhang and Yunxin Liu and Kai Chen},
      year={2024},
      eprint={2407.11963},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.11963},
}
```