#### Evaluation

The evaluation of LongEmbed can be easily conducted using MTEB (>=1.6.22). For the four real tasks, you can evaluate as follows:

```python
from mteb import MTEB

retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
output_dict = {}

evaluation = MTEB(tasks=retrieval_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
for key, value in results.items():
    split = "test" if "test" in value else "validation"
    output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
print(output_dict)
```
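The snippet above assumes that `model` and `args` are already defined. As a minimal sketch of that missing step (the checkpoint name and the `args` values below are illustrative placeholders, not part of this README), any model exposing an `encode` method works, for example a `SentenceTransformer`:

```python
from types import SimpleNamespace

from sentence_transformers import SentenceTransformer

# Illustrative checkpoint; substitute the embedding model you want to benchmark.
model = SentenceTransformer("intfloat/e5-base-v2")

# Stand-in for the argparse namespace referenced as `args` in the snippet above.
args = SimpleNamespace(output_dir="results/longembed", batch_size=16)
```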
For the two synthetic tasks, since we examine a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens, an additional parameter of `context_length` is required. You may evaluate as follows:

```python
from mteb import MTEB

needle_passkey_task_list = ["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"]
output_dict = {}
context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]

evaluation = MTEB(tasks=needle_passkey_task_list)
# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
for key, value in results.items():
    needle_passkey_score_list = []
    for ctx_len in context_length_list:
        needle_passkey_score_list.append([ctx_len, value[f"test_{ctx_len}"]["ndcg_at_1"]])
    needle_passkey_score_list.append(["avg", sum(x[1] for x in needle_passkey_score_list) / len(context_length_list)])
    output_dict[key] = {item[0]: item[1] for item in needle_passkey_score_list}
print(output_dict)
```
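If you prefer a formatted table over a raw dict (an earlier revision of this snippet printed its scores with `tabulate` in the same way), the per-context-length results can be rendered like this:

```python
from tabulate import tabulate

# Print each task's nDCG@1 at every context length, plus the "avg" row.
for task_name, scores in output_dict.items():
    print(task_name)
    print(tabulate(list(scores.items()), headers=["Context Length", "nDCG@1"]))
```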
## Task Description

LongEmbed includes 4 real-world retrieval tasks curated from long-form QA and summarization:

- [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set has only 60 documents, which is too small for document retrieval, we include the `train` set in addition to the `validation` set.
- [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. Similar to QMSum, its plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).

We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from the [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for the efficiency of evaluation. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.
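To inspect the synthetic data directly, the splits can also be loaded with `datasets`. The sketch below is an assumption on two counts: that `needle` is the configuration name, and that split names follow the `test_{context_length}` pattern used in the evaluation code above:

```python
from datasets import load_dataset

# Assumed configuration name ("needle") and split naming (test_{context_length}).
needle_256 = load_dataset(path="dwzhu/LongEmbed", name="needle", split="test_256")
print(needle_256)  # expected: 50 samples, each pairing 1 query with 100 candidate documents
```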
## Task Statistics